Test Report: Docker_Linux_crio_arm64 21772

efb80dd6659b26178e36f8b49f3cb836e30a0156:2025-10-19:41980

Failed tests (36/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.73
35 TestAddons/parallel/Registry 14.59
36 TestAddons/parallel/RegistryCreds 0.66
37 TestAddons/parallel/Ingress 146.35
38 TestAddons/parallel/InspektorGadget 5.35
39 TestAddons/parallel/MetricsServer 5.37
41 TestAddons/parallel/CSI 45.87
42 TestAddons/parallel/Headlamp 3.28
43 TestAddons/parallel/CloudSpanner 6.28
44 TestAddons/parallel/LocalPath 9.44
45 TestAddons/parallel/NvidiaDevicePlugin 6.31
46 TestAddons/parallel/Yakd 5.45
98 TestFunctional/parallel/ServiceCmdConnect 603.85
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.2
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.16
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.11
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
144 TestFunctional/parallel/ServiceCmd/DeployApp 600.92
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
154 TestFunctional/parallel/ServiceCmd/Format 0.44
155 TestFunctional/parallel/ServiceCmd/URL 0.39
191 TestJSONOutput/pause/Command 2.05
197 TestJSONOutput/unpause/Command 1.41
261 TestPause/serial/Pause 8.44
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.64
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.77
310 TestStartStop/group/old-k8s-version/serial/Pause 8.12
316 TestStartStop/group/no-preload/serial/Pause 7
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.99
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.54
332 TestStartStop/group/embed-certs/serial/Pause 8.98
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.53
343 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.64
347 TestStartStop/group/newest-cni/serial/Pause 7.57
TestAddons/serial/Volcano (0.73s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable volcano --alsologtostderr -v=1: exit status 11 (731.919697ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:16:44.795600  301281 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:16:44.797096  301281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:16:44.797112  301281 out.go:374] Setting ErrFile to fd 2...
	I1019 12:16:44.797118  301281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:16:44.797432  301281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:16:44.797974  301281 mustload.go:65] Loading cluster: addons-694780
	I1019 12:16:44.798391  301281 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:16:44.798412  301281 addons.go:606] checking whether the cluster is paused
	I1019 12:16:44.798598  301281 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:16:44.798642  301281 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:16:44.799145  301281 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:16:44.816173  301281 ssh_runner.go:195] Run: systemctl --version
	I1019 12:16:44.816485  301281 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:16:44.834850  301281 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:16:44.940155  301281 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:16:44.940306  301281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:16:44.972171  301281 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:16:44.972238  301281 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:16:44.972258  301281 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:16:44.972279  301281 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:16:44.972296  301281 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:16:44.972315  301281 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:16:44.972343  301281 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:16:44.972364  301281 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:16:44.972384  301281 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:16:44.972408  301281 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:16:44.972435  301281 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:16:44.972457  301281 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:16:44.972477  301281 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:16:44.972497  301281 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:16:44.972516  301281 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:16:44.972538  301281 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:16:44.972573  301281 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:16:44.972602  301281 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:16:44.972624  301281 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:16:44.972643  301281 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:16:44.972665  301281 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:16:44.972693  301281 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:16:44.972716  301281 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:16:44.972735  301281 cri.go:89] found id: ""
	I1019 12:16:44.972809  301281 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:16:44.987847  301281 out.go:203] 
	W1019 12:16:44.990747  301281 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:16:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:16:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:16:44.990768  301281 out.go:285] * 
	* 
	W1019 12:16:45.432644  301281 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:16:45.435685  301281 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.73s)
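Note: the disable fails in minikube's paused-state check, not in the Volcano addon itself. Before disabling an addon the CLI verifies the cluster is not paused by running `sudo runc list -f json` on the node, and that command exits 1 here because /run/runc does not exist on this crio node (crio keeps its own runtime state, so the default runc root plausibly never gets created; this reading is an inference from the log above, not confirmed). A minimal sketch to check this on a live profile, reusing only commands already shown in the log:

	# Does the default runc state directory exist on the node?
	out/minikube-linux-arm64 -p addons-694780 ssh "sudo ls -ld /run/runc"
	# The CRI-level listing that precedes the runc call still works:
	out/minikube-linux-arm64 -p addons-694780 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"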

TestAddons/parallel/Registry (14.59s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 15.256175ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004401881s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003468921s
addons_test.go:392: (dbg) Run:  kubectl --context addons-694780 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-694780 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-694780 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.03307151s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 ip
2025/10/19 12:17:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable registry --alsologtostderr -v=1: exit status 11 (294.765131ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:17:10.068630  302211 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:17:10.069577  302211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:10.069618  302211 out.go:374] Setting ErrFile to fd 2...
	I1019 12:17:10.069642  302211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:10.070000  302211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:17:10.070350  302211 mustload.go:65] Loading cluster: addons-694780
	I1019 12:17:10.070799  302211 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:10.070849  302211 addons.go:606] checking whether the cluster is paused
	I1019 12:17:10.070985  302211 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:10.071033  302211 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:17:10.071548  302211 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:17:10.091531  302211 ssh_runner.go:195] Run: systemctl --version
	I1019 12:17:10.091588  302211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:17:10.111976  302211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:17:10.230169  302211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:17:10.230268  302211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:17:10.278379  302211 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:17:10.278404  302211 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:17:10.278409  302211 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:17:10.278413  302211 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:17:10.278417  302211 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:17:10.278420  302211 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:17:10.278423  302211 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:17:10.278426  302211 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:17:10.278431  302211 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:17:10.278441  302211 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:17:10.278453  302211 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:17:10.278457  302211 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:17:10.278460  302211 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:17:10.278464  302211 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:17:10.278467  302211 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:17:10.278482  302211 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:17:10.278486  302211 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:17:10.278490  302211 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:17:10.278493  302211 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:17:10.278496  302211 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:17:10.278501  302211 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:17:10.278504  302211 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:17:10.278507  302211 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:17:10.278510  302211 cri.go:89] found id: ""
	I1019 12:17:10.278566  302211 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:17:10.294582  302211 out.go:203] 
	W1019 12:17:10.297548  302211 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:17:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:17:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:17:10.297586  302211 out.go:285] * 
	* 
	W1019 12:17:10.304024  302211 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:17:10.307109  302211 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.59s)
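Note: the registry itself was healthy: both pods passed their readiness waits and the in-cluster HTTP probe succeeded, and the FAIL comes from the same MK_ADDON_DISABLE_PAUSED exit seen in the Volcano test above. For reference, the in-cluster probe can be rerun by hand exactly as the harness does it:

	kubectl --context addons-694780 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"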

TestAddons/parallel/RegistryCreds (0.66s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.925077ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-694780
addons_test.go:332: (dbg) Run:  kubectl --context addons-694780 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (295.491446ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:17:45.321065  303342 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:17:45.324459  303342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:45.324541  303342 out.go:374] Setting ErrFile to fd 2...
	I1019 12:17:45.324570  303342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:45.325427  303342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:17:45.327850  303342 mustload.go:65] Loading cluster: addons-694780
	I1019 12:17:45.328378  303342 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:45.328434  303342 addons.go:606] checking whether the cluster is paused
	I1019 12:17:45.328599  303342 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:45.328633  303342 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:17:45.329193  303342 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:17:45.354832  303342 ssh_runner.go:195] Run: systemctl --version
	I1019 12:17:45.354892  303342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:17:45.374560  303342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:17:45.476338  303342 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:17:45.476425  303342 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:17:45.506727  303342 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:17:45.506750  303342 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:17:45.506756  303342 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:17:45.506760  303342 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:17:45.506763  303342 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:17:45.506767  303342 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:17:45.506770  303342 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:17:45.506774  303342 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:17:45.506795  303342 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:17:45.506807  303342 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:17:45.506810  303342 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:17:45.506814  303342 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:17:45.506817  303342 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:17:45.506830  303342 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:17:45.506834  303342 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:17:45.506839  303342 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:17:45.506843  303342 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:17:45.506847  303342 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:17:45.506850  303342 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:17:45.506853  303342 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:17:45.506880  303342 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:17:45.506890  303342 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:17:45.506893  303342 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:17:45.506896  303342 cri.go:89] found id: ""
	I1019 12:17:45.506955  303342 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:17:45.522573  303342 out.go:203] 
	W1019 12:17:45.525407  303342 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:17:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:17:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:17:45.525440  303342 out.go:285] * 
	* 
	W1019 12:17:45.531742  303342 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:17:45.534623  303342 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.66s)

TestAddons/parallel/Ingress (146.35s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-694780 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-694780 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-694780 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [27c750cd-f6fb-411d-883a-a396db6bd44c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [27c750cd-f6fb-411d-883a-a396db6bd44c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.013596608s
I1019 12:17:32.930525  294518 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.012721732s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
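Note: minikube ssh propagates the remote command's exit status, and exit code 28 from curl is its operation-timed-out status (CURLE_OPERATION_TIMEDOUT), so the request through the ingress hung until it timed out rather than returning an error page. A hand-run variant of the same check with an explicit client-side timeout and the response code printed (same profile and Host header as the test; the 10s limit is an arbitrary choice for illustration):

	out/minikube-linux-arm64 -p addons-694780 ssh "curl -s -m 10 -o /dev/null -w '%{http_code}\n' -H 'Host: nginx.example.com' http://127.0.0.1/"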
addons_test.go:288: (dbg) Run:  kubectl --context addons-694780 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-694780
helpers_test.go:243: (dbg) docker inspect addons-694780:

-- stdout --
	[
	    {
	        "Id": "1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6",
	        "Created": "2025-10-19T12:14:08.1789404Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295674,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:14:08.236356286Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6/hostname",
	        "HostsPath": "/var/lib/docker/containers/1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6/hosts",
	        "LogPath": "/var/lib/docker/containers/1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6/1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6-json.log",
	        "Name": "/addons-694780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-694780:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-694780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6",
	                "LowerDir": "/var/lib/docker/overlay2/24b4d74c051b53eb5a98090b6fae5882d58acd7c302d8ac3ca9c1204895981b4-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/24b4d74c051b53eb5a98090b6fae5882d58acd7c302d8ac3ca9c1204895981b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/24b4d74c051b53eb5a98090b6fae5882d58acd7c302d8ac3ca9c1204895981b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/24b4d74c051b53eb5a98090b6fae5882d58acd7c302d8ac3ca9c1204895981b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-694780",
	                "Source": "/var/lib/docker/volumes/addons-694780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-694780",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-694780",
	                "name.minikube.sigs.k8s.io": "addons-694780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "25b961a5947230fb374b7ba5aa98853a7d9052cf5fbe149e8a1cb968e89f5d03",
	            "SandboxKey": "/var/run/docker/netns/25b961a59472",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-694780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:6c:a2:a2:7e:bc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e72e5be0d8e39e54cd93f8c6194d3277252a7c979ea76a31ac8ec3c9e23e57fe",
	                    "EndpointID": "f500bb2a92c27f024cf66fb0bebe85c183d7984b6851977c8ffe1150fba4b24e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-694780",
	                        "1204b1775048"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
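Note: this inspect output is where minikube derives its SSH endpoint: 22/tcp of the node container is published on 127.0.0.1:33138, matching the `sshutil` lines in the stderr dumps earlier in this report. The Go template the logs show can be run by hand to extract that mapping:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-694780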
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-694780 -n addons-694780
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-694780 logs -n 25: (1.720270219s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-107639                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-107639 │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:13 UTC │
	│ start   │ --download-only -p binary-mirror-974688 --alsologtostderr --binary-mirror http://127.0.0.1:41571 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-974688   │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │                     │
	│ delete  │ -p binary-mirror-974688                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-974688   │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:13 UTC │
	│ addons  │ enable dashboard -p addons-694780                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │                     │
	│ addons  │ disable dashboard -p addons-694780                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │                     │
	│ start   │ -p addons-694780 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:16 UTC │
	│ addons  │ addons-694780 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:16 UTC │                     │
	│ addons  │ addons-694780 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-694780 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:16 UTC │                     │
	│ addons  │ addons-694780 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:16 UTC │                     │
	│ ip      │ addons-694780 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:17 UTC │ 19 Oct 25 12:17 UTC │
	│ addons  │ addons-694780 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:17 UTC │                     │
	│ addons  │ addons-694780 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:17 UTC │                     │
	│ addons  │ addons-694780 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:17 UTC │                     │
	│ ssh     │ addons-694780 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:17 UTC │                     │
	│ addons  │ addons-694780 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:17 UTC │                     │
	│ addons  │ addons-694780 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:17 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-694780                                                                                                                                                                                                                                                                                                                                                                                           │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:17 UTC │ 19 Oct 25 12:17 UTC │
	│ addons  │ addons-694780 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:17 UTC │                     │
	│ ssh     │ addons-694780 ssh cat /opt/local-path-provisioner/pvc-c2e9b24a-4b9e-48a1-a73a-ec392ca86059_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:17 UTC │ 19 Oct 25 12:17 UTC │
	│ addons  │ addons-694780 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:17 UTC │                     │
	│ addons  │ addons-694780 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:18 UTC │                     │
	│ addons  │ addons-694780 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:18 UTC │                     │
	│ addons  │ addons-694780 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:18 UTC │                     │
	│ ip      │ addons-694780 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:19 UTC │ 19 Oct 25 12:19 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:13:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:13:42.003152  295274 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:13:42.003315  295274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:13:42.003352  295274 out.go:374] Setting ErrFile to fd 2...
	I1019 12:13:42.003359  295274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:13:42.003730  295274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:13:42.004387  295274 out.go:368] Setting JSON to false
	I1019 12:13:42.005390  295274 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6972,"bootTime":1760869050,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 12:13:42.005488  295274 start.go:141] virtualization:  
	I1019 12:13:42.009048  295274 out.go:179] * [addons-694780] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 12:13:42.013259  295274 notify.go:220] Checking for updates...
	I1019 12:13:42.016485  295274 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:13:42.019473  295274 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:13:42.022577  295274 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 12:13:42.025710  295274 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 12:13:42.028732  295274 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 12:13:42.031894  295274 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:13:42.035161  295274 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:13:42.069927  295274 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 12:13:42.070076  295274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:13:42.147765  295274 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-19 12:13:42.137429844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:13:42.147899  295274 docker.go:318] overlay module found
	I1019 12:13:42.151135  295274 out.go:179] * Using the docker driver based on user configuration
	I1019 12:13:42.154130  295274 start.go:305] selected driver: docker
	I1019 12:13:42.154181  295274 start.go:925] validating driver "docker" against <nil>
	I1019 12:13:42.154203  295274 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:13:42.155033  295274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:13:42.220415  295274 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-19 12:13:42.20881311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:13:42.220594  295274 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:13:42.220831  295274 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:13:42.223828  295274 out.go:179] * Using Docker driver with root privileges
	I1019 12:13:42.226742  295274 cni.go:84] Creating CNI manager for ""
	I1019 12:13:42.226821  295274 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:13:42.226834  295274 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:13:42.226927  295274 start.go:349] cluster config:
	{Name:addons-694780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-694780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:13:42.230316  295274 out.go:179] * Starting "addons-694780" primary control-plane node in "addons-694780" cluster
	I1019 12:13:42.233216  295274 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:13:42.236285  295274 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:13:42.239184  295274 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:13:42.239247  295274 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 12:13:42.239281  295274 cache.go:58] Caching tarball of preloaded images
	I1019 12:13:42.239271  295274 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:13:42.239413  295274 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 12:13:42.239426  295274 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:13:42.239820  295274 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/config.json ...
	I1019 12:13:42.239855  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/config.json: {Name:mk4d2d5e0873fa20b844f128ceba5b32c5ea6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:13:42.257356  295274 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 12:13:42.257521  295274 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 12:13:42.257541  295274 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1019 12:13:42.257558  295274 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1019 12:13:42.257566  295274 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1019 12:13:42.257571  295274 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1019 12:14:00.396934  295274 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1019 12:14:00.396972  295274 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:14:00.397005  295274 start.go:360] acquireMachinesLock for addons-694780: {Name:mk35cb5f0a4d472e9c073f15331d1036d68f1f63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:14:00.397160  295274 start.go:364] duration metric: took 134.985µs to acquireMachinesLock for "addons-694780"
	I1019 12:14:00.397192  295274 start.go:93] Provisioning new machine with config: &{Name:addons-694780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-694780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:14:00.397300  295274 start.go:125] createHost starting for "" (driver="docker")
	I1019 12:14:00.400926  295274 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1019 12:14:00.401206  295274 start.go:159] libmachine.API.Create for "addons-694780" (driver="docker")
	I1019 12:14:00.401264  295274 client.go:168] LocalClient.Create starting
	I1019 12:14:00.401425  295274 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem
	I1019 12:14:00.608971  295274 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem
	I1019 12:14:01.439067  295274 cli_runner.go:164] Run: docker network inspect addons-694780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 12:14:01.454592  295274 cli_runner.go:211] docker network inspect addons-694780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 12:14:01.454696  295274 network_create.go:284] running [docker network inspect addons-694780] to gather additional debugging logs...
	I1019 12:14:01.454720  295274 cli_runner.go:164] Run: docker network inspect addons-694780
	W1019 12:14:01.469995  295274 cli_runner.go:211] docker network inspect addons-694780 returned with exit code 1
	I1019 12:14:01.470025  295274 network_create.go:287] error running [docker network inspect addons-694780]: docker network inspect addons-694780: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-694780 not found
	I1019 12:14:01.470040  295274 network_create.go:289] output of [docker network inspect addons-694780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-694780 not found
	
	** /stderr **
	I1019 12:14:01.470150  295274 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:14:01.487388  295274 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c46050}
	I1019 12:14:01.487434  295274 network_create.go:124] attempt to create docker network addons-694780 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1019 12:14:01.487489  295274 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-694780 addons-694780
	I1019 12:14:01.547881  295274 network_create.go:108] docker network addons-694780 192.168.49.0/24 created
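The free-subnet scan above settled on 192.168.49.0/24 with gateway 192.168.49.1. A minimal sketch (not part of the recorded run) to confirm that against the live network:
	# Print the subnet and gateway minikube picked for the cluster network.
	docker network inspect addons-694780 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
Per the creation line above, this should print "192.168.49.0/24 192.168.49.1".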
	I1019 12:14:01.547910  295274 kic.go:121] calculated static IP "192.168.49.2" for the "addons-694780" container
	I1019 12:14:01.547991  295274 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 12:14:01.563837  295274 cli_runner.go:164] Run: docker volume create addons-694780 --label name.minikube.sigs.k8s.io=addons-694780 --label created_by.minikube.sigs.k8s.io=true
	I1019 12:14:01.584920  295274 oci.go:103] Successfully created a docker volume addons-694780
	I1019 12:14:01.585017  295274 cli_runner.go:164] Run: docker run --rm --name addons-694780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-694780 --entrypoint /usr/bin/test -v addons-694780:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 12:14:03.689636  295274 cli_runner.go:217] Completed: docker run --rm --name addons-694780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-694780 --entrypoint /usr/bin/test -v addons-694780:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.10457888s)
	I1019 12:14:03.689667  295274 oci.go:107] Successfully prepared a docker volume addons-694780
	I1019 12:14:03.689727  295274 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:14:03.689775  295274 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 12:14:03.689863  295274 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-694780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 12:14:08.112616  295274 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-694780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.422712279s)
	I1019 12:14:08.112651  295274 kic.go:203] duration metric: took 4.422884967s to extract preloaded images to volume ...
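The tar sidecar above unpacks the preloaded CRI-O image store into the addons-694780 volume. A hypothetical spot check of the result, reusing the same kicbase image and the --entrypoint pattern from the log (the storage path is an assumption about the tarball layout):
	# List the unpacked image store inside the cluster volume (path assumed).
	docker run --rm --entrypoint /usr/bin/ls \
	  -v addons-694780:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 \
	  /var/lib/containers/storage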
	W1019 12:14:08.112812  295274 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 12:14:08.112922  295274 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 12:14:08.164188  295274 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-694780 --name addons-694780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-694780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-694780 --network addons-694780 --ip 192.168.49.2 --volume addons-694780:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 12:14:08.447305  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Running}}
	I1019 12:14:08.467652  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:08.489554  295274 cli_runner.go:164] Run: docker exec addons-694780 stat /var/lib/dpkg/alternatives/iptables
	I1019 12:14:08.539040  295274 oci.go:144] the created container "addons-694780" has a running status.
	I1019 12:14:08.539067  295274 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa...
	I1019 12:14:08.959464  295274 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 12:14:08.999733  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:09.019149  295274 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 12:14:09.019170  295274 kic_runner.go:114] Args: [docker exec --privileged addons-694780 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 12:14:09.060907  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:09.079146  295274 machine.go:93] provisionDockerMachine start ...
	I1019 12:14:09.079247  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:09.096513  295274 main.go:141] libmachine: Using SSH client type: native
	I1019 12:14:09.096856  295274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:14:09.096865  295274 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:14:09.097531  295274 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1019 12:14:12.244930  295274 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-694780
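The first dial fails with an SSH handshake EOF while sshd is still coming up; the retry then succeeds. With the forwarded port (33138) and the generated key shown in the log, an equivalent manual probe would look like:
	# Hypothetical manual login to the node, mirroring the provisioner's probe.
	ssh -o StrictHostKeyChecking=no -p 33138 \
	  -i /home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa \
	  docker@127.0.0.1 hostname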
	
	I1019 12:14:12.244955  295274 ubuntu.go:182] provisioning hostname "addons-694780"
	I1019 12:14:12.245016  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:12.262129  295274 main.go:141] libmachine: Using SSH client type: native
	I1019 12:14:12.262441  295274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:14:12.262458  295274 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-694780 && echo "addons-694780" | sudo tee /etc/hostname
	I1019 12:14:12.418418  295274 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-694780
	
	I1019 12:14:12.418524  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:12.435143  295274 main.go:141] libmachine: Using SSH client type: native
	I1019 12:14:12.435470  295274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:14:12.435493  295274 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-694780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-694780/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-694780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:14:12.581596  295274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:14:12.581624  295274 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 12:14:12.581652  295274 ubuntu.go:190] setting up certificates
	I1019 12:14:12.581663  295274 provision.go:84] configureAuth start
	I1019 12:14:12.581740  295274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-694780
	I1019 12:14:12.602074  295274 provision.go:143] copyHostCerts
	I1019 12:14:12.602167  295274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 12:14:12.602325  295274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 12:14:12.602387  295274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 12:14:12.602436  295274 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.addons-694780 san=[127.0.0.1 192.168.49.2 addons-694780 localhost minikube]
	I1019 12:14:12.862682  295274 provision.go:177] copyRemoteCerts
	I1019 12:14:12.862749  295274 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:14:12.862789  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:12.882034  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:12.985295  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:14:13.003021  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 12:14:13.021703  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:14:13.039512  295274 provision.go:87] duration metric: took 457.823307ms to configureAuth
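The server certificate generated above should carry the SANs listed in the generation step (127.0.0.1, 192.168.49.2, addons-694780, localhost, minikube). A hypothetical openssl check against the host-side copy:
	# Show the SAN extension of the freshly generated server certificate.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'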
	I1019 12:14:13.039538  295274 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:14:13.039731  295274 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:14:13.039842  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:13.056705  295274 main.go:141] libmachine: Using SSH client type: native
	I1019 12:14:13.057015  295274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:14:13.057035  295274 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:14:13.309490  295274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:14:13.309510  295274 machine.go:96] duration metric: took 4.230345285s to provisionDockerMachine
	I1019 12:14:13.309520  295274 client.go:171] duration metric: took 12.908242335s to LocalClient.Create
	I1019 12:14:13.309533  295274 start.go:167] duration metric: took 12.908329171s to libmachine.API.Create "addons-694780"
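Provisioning wrote CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarted cri-o over SSH. A hypothetical follow-up from the host to confirm the drop-in landed and the service survived the restart:
	# Read back the sysconfig drop-in and the service state inside the node.
	docker exec addons-694780 cat /etc/sysconfig/crio.minikube
	docker exec addons-694780 systemctl is-active crio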
	I1019 12:14:13.309540  295274 start.go:293] postStartSetup for "addons-694780" (driver="docker")
	I1019 12:14:13.309550  295274 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:14:13.309614  295274 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:14:13.309668  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:13.327621  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:13.433731  295274 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:14:13.437088  295274 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:14:13.437153  295274 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:14:13.437171  295274 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 12:14:13.437254  295274 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 12:14:13.437281  295274 start.go:296] duration metric: took 127.735602ms for postStartSetup
	I1019 12:14:13.437606  295274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-694780
	I1019 12:14:13.453886  295274 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/config.json ...
	I1019 12:14:13.454182  295274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:14:13.454232  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:13.470698  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:13.570435  295274 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:14:13.574907  295274 start.go:128] duration metric: took 13.177582447s to createHost
	I1019 12:14:13.574933  295274 start.go:83] releasing machines lock for "addons-694780", held for 13.177762528s
	I1019 12:14:13.575004  295274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-694780
	I1019 12:14:13.591947  295274 ssh_runner.go:195] Run: cat /version.json
	I1019 12:14:13.592006  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:13.592275  295274 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:14:13.592343  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:13.610194  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:13.613784  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:13.802843  295274 ssh_runner.go:195] Run: systemctl --version
	I1019 12:14:13.809147  295274 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:14:13.845006  295274 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:14:13.849342  295274 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:14:13.849414  295274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:14:13.878459  295274 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 12:14:13.878485  295274 start.go:495] detecting cgroup driver to use...
	I1019 12:14:13.878546  295274 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 12:14:13.878611  295274 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:14:13.895191  295274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:14:13.907468  295274 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:14:13.907535  295274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:14:13.924986  295274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:14:13.943207  295274 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:14:14.060735  295274 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:14:14.190231  295274 docker.go:234] disabling docker service ...
	I1019 12:14:14.190303  295274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:14:14.210252  295274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:14:14.223688  295274 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:14:14.341737  295274 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:14:14.465040  295274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:14:14.476952  295274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:14:14.491108  295274 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:14:14.491194  295274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:14:14.499208  295274 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 12:14:14.499300  295274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:14:14.507935  295274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:14:14.516399  295274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:14:14.524574  295274 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:14:14.532356  295274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:14:14.540789  295274 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:14:14.554659  295274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
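The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place. A hypothetical in-node spot check of the settings it touches:
	# Verify pause image, cgroup manager, conmon cgroup, and the sysctl entry.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf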
	I1019 12:14:14.563160  295274 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:14:14.570601  295274 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:14:14.578071  295274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:14:14.696899  295274 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 12:14:14.818827  295274 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:14:14.818914  295274 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:14:14.822875  295274 start.go:563] Will wait 60s for crictl version
	I1019 12:14:14.822937  295274 ssh_runner.go:195] Run: which crictl
	I1019 12:14:14.826449  295274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:14:14.855395  295274 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
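The same version information can be requested manually with the endpoint written to /etc/crictl.yaml above; a minimal in-node equivalent:
	# Query the runtime directly over the CRI socket.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version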
	I1019 12:14:14.855493  295274 ssh_runner.go:195] Run: crio --version
	I1019 12:14:14.885643  295274 ssh_runner.go:195] Run: crio --version
	I1019 12:14:14.918034  295274 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:14:14.920933  295274 cli_runner.go:164] Run: docker network inspect addons-694780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:14:14.937811  295274 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1019 12:14:14.941603  295274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:14:14.952026  295274 kubeadm.go:883] updating cluster {Name:addons-694780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-694780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:14:14.952163  295274 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:14:14.952230  295274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:14:14.985307  295274 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:14:14.985331  295274 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:14:14.985386  295274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:14:15.021251  295274 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:14:15.021277  295274 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:14:15.021285  295274 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1019 12:14:15.021388  295274 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-694780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-694780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
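Once the drop-in above is installed at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp step below), the effective flags can be read back; a hypothetical in-node check:
	# Confirm the node IP flag made it into the active kubelet unit.
	systemctl cat kubelet | grep -- --node-ip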
	I1019 12:14:15.021481  295274 ssh_runner.go:195] Run: crio config
	I1019 12:14:15.102018  295274 cni.go:84] Creating CNI manager for ""
	I1019 12:14:15.102042  295274 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:14:15.102070  295274 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:14:15.102096  295274 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-694780 NodeName:addons-694780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:14:15.102227  295274 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-694780"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
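The rendered manifest is copied below to /var/tmp/minikube/kubeadm.yaml.new. Assuming the validate subcommand is available in this kubeadm build, a hypothetical pre-flight on the file with the staged binary:
	# Validate the generated kubeadm config before it is handed to init.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new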
	
	I1019 12:14:15.102309  295274 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:14:15.111146  295274 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:14:15.111238  295274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:14:15.119726  295274 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1019 12:14:15.133417  295274 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:14:15.147105  295274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1019 12:14:15.160654  295274 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:14:15.164403  295274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:14:15.174825  295274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:14:15.290298  295274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:14:15.305425  295274 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780 for IP: 192.168.49.2
	I1019 12:14:15.305496  295274 certs.go:195] generating shared ca certs ...
	I1019 12:14:15.305525  295274 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:15.305710  295274 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 12:14:15.699481  295274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt ...
	I1019 12:14:15.699513  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt: {Name:mkbdb340720a23421771727d8d82cd155586a3a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:15.699711  295274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key ...
	I1019 12:14:15.699725  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key: {Name:mkb0f5ea7800903ee705f0d24dab1dda42de7cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:15.700470  295274 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 12:14:16.978428  295274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt ...
	I1019 12:14:16.978461  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt: {Name:mk478a139f219e0253a4433782505f57036a141f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:16.979250  295274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key ...
	I1019 12:14:16.979270  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key: {Name:mk262cdae2421416cd180a921f27e81c3d2f5e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:16.979923  295274 certs.go:257] generating profile certs ...
	I1019 12:14:16.979995  295274 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.key
	I1019 12:14:16.980016  295274 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt with IP's: []
	I1019 12:14:17.175754  295274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt ...
	I1019 12:14:17.175788  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: {Name:mk49143f236dccb148777098ef32cfeedec13fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:17.175979  295274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.key ...
	I1019 12:14:17.175996  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.key: {Name:mk45d333176afd19a6094b3d6823bdfa3b87aaab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:17.176734  295274 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.key.0b167051
	I1019 12:14:17.176759  295274 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.crt.0b167051 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1019 12:14:17.852454  295274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.crt.0b167051 ...
	I1019 12:14:17.852485  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.crt.0b167051: {Name:mk0196843a6b59e70435c85f289b7fcb0e8b8230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:17.853355  295274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.key.0b167051 ...
	I1019 12:14:17.853372  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.key.0b167051: {Name:mk17476e861b595ca5cc127a8d4936060a774bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:17.854045  295274 certs.go:382] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.crt.0b167051 -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.crt
	I1019 12:14:17.854173  295274 certs.go:386] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.key.0b167051 -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.key
	I1019 12:14:17.854235  295274 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.key
	I1019 12:14:17.854257  295274 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.crt with IP's: []
	I1019 12:14:19.142733  295274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.crt ...
	I1019 12:14:19.142766  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.crt: {Name:mke18feaf7a8d5491aa718a872ccbfff12b25f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:19.143550  295274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.key ...
	I1019 12:14:19.143570  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.key: {Name:mk7208500ffb6cf3608b744a96af205315fe241d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:19.144405  295274 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 12:14:19.144465  295274 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:14:19.144494  295274 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:14:19.144521  295274 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 12:14:19.145165  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:14:19.163832  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:14:19.183901  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:14:19.204575  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 12:14:19.224308  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 12:14:19.243250  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:14:19.260449  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:14:19.278256  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:14:19.296406  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:14:19.313736  295274 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
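With all of the key material copied over, the apiserver certificate on the node can be spot-checked against the SAN list requested at 12:14:17 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2). One way, with OpenSSL 1.1.1 or newer:

    openssl x509 -noout -ext subjectAltName \
        -in /var/lib/minikube/certs/apiserver.crt
    # the IP Address entries should match the four IPs logged above;
    # DNS SANs for the API server names are listed alongside them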
	I1019 12:14:19.326189  295274 ssh_runner.go:195] Run: openssl version
	I1019 12:14:19.332244  295274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:14:19.340993  295274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:14:19.344493  295274 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:14:19.344557  295274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:14:19.387279  295274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
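The b5213941.0 link name is not arbitrary: OpenSSL resolves CAs in /etc/ssl/certs by subject-name hash, and the `openssl x509 -hash` invocation two lines up is where that value comes from. Done by hand, the equivalent is:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$hash"                                    # b5213941 for this CA, per the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    # the .0 suffix disambiguates certificates whose subject hashes collide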
	I1019 12:14:19.395469  295274 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:14:19.398969  295274 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 12:14:19.399021  295274 kubeadm.go:400] StartCluster: {Name:addons-694780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-694780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:14:19.399091  295274 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:14:19.399144  295274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:14:19.425294  295274 cri.go:89] found id: ""
	I1019 12:14:19.425372  295274 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:14:19.432877  295274 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 12:14:19.440367  295274 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 12:14:19.440433  295274 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 12:14:19.448066  295274 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 12:14:19.448130  295274 kubeadm.go:157] found existing configuration files:
	
	I1019 12:14:19.448207  295274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 12:14:19.455735  295274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 12:14:19.455811  295274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 12:14:19.463349  295274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 12:14:19.470905  295274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 12:14:19.470989  295274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 12:14:19.478044  295274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 12:14:19.485424  295274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 12:14:19.485741  295274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 12:14:19.496364  295274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 12:14:19.503984  295274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 12:14:19.504078  295274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 12:14:19.511378  295274 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 12:14:19.578409  295274 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 12:14:19.578748  295274 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 12:14:19.650148  295274 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 12:14:37.399579  295274 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 12:14:37.399639  295274 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 12:14:37.399767  295274 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 12:14:37.399836  295274 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 12:14:37.399872  295274 kubeadm.go:318] OS: Linux
	I1019 12:14:37.399920  295274 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 12:14:37.399971  295274 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1019 12:14:37.400021  295274 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 12:14:37.400072  295274 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 12:14:37.400122  295274 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 12:14:37.400175  295274 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 12:14:37.400222  295274 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 12:14:37.400272  295274 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 12:14:37.400320  295274 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1019 12:14:37.400396  295274 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 12:14:37.400494  295274 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 12:14:37.400588  295274 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 12:14:37.400677  295274 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 12:14:37.403847  295274 out.go:252]   - Generating certificates and keys ...
	I1019 12:14:37.403937  295274 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 12:14:37.404010  295274 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1019 12:14:37.404091  295274 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 12:14:37.404155  295274 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 12:14:37.404221  295274 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 12:14:37.404278  295274 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 12:14:37.404338  295274 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 12:14:37.404462  295274 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-694780 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 12:14:37.404521  295274 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 12:14:37.404644  295274 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-694780 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 12:14:37.404715  295274 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 12:14:37.404785  295274 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 12:14:37.404842  295274 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 12:14:37.404901  295274 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 12:14:37.404960  295274 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 12:14:37.405024  295274 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 12:14:37.405086  295274 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 12:14:37.405158  295274 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 12:14:37.405220  295274 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 12:14:37.405309  295274 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 12:14:37.405382  295274 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 12:14:37.410146  295274 out.go:252]   - Booting up control plane ...
	I1019 12:14:37.410270  295274 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 12:14:37.410356  295274 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 12:14:37.410469  295274 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 12:14:37.410641  295274 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 12:14:37.410755  295274 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 12:14:37.410887  295274 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 12:14:37.410982  295274 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 12:14:37.411029  295274 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 12:14:37.411169  295274 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 12:14:37.411280  295274 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 12:14:37.411345  295274 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500795726s
	I1019 12:14:37.411445  295274 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 12:14:37.411532  295274 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1019 12:14:37.411628  295274 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 12:14:37.411713  295274 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 12:14:37.411795  295274 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.296248699s
	I1019 12:14:37.411868  295274 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.966396845s
	I1019 12:14:37.411942  295274 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502030707s
	I1019 12:14:37.412055  295274 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 12:14:37.412189  295274 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 12:14:37.412253  295274 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 12:14:37.412456  295274 kubeadm.go:318] [mark-control-plane] Marking the node addons-694780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 12:14:37.412518  295274 kubeadm.go:318] [bootstrap-token] Using token: ye03ax.m6ox9zrlec6c94l4
	I1019 12:14:37.415499  295274 out.go:252]   - Configuring RBAC rules ...
	I1019 12:14:37.415668  295274 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 12:14:37.415778  295274 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 12:14:37.415942  295274 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 12:14:37.416076  295274 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 12:14:37.416196  295274 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 12:14:37.416285  295274 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 12:14:37.416405  295274 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 12:14:37.416450  295274 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 12:14:37.416498  295274 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 12:14:37.416502  295274 kubeadm.go:318] 
	I1019 12:14:37.416565  295274 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 12:14:37.416569  295274 kubeadm.go:318] 
	I1019 12:14:37.416649  295274 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 12:14:37.416653  295274 kubeadm.go:318] 
	I1019 12:14:37.416680  295274 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 12:14:37.416761  295274 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 12:14:37.416814  295274 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 12:14:37.416819  295274 kubeadm.go:318] 
	I1019 12:14:37.416876  295274 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 12:14:37.416879  295274 kubeadm.go:318] 
	I1019 12:14:37.416929  295274 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 12:14:37.416933  295274 kubeadm.go:318] 
	I1019 12:14:37.416987  295274 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 12:14:37.417065  295274 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 12:14:37.417136  295274 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 12:14:37.417141  295274 kubeadm.go:318] 
	I1019 12:14:37.417228  295274 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 12:14:37.417308  295274 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 12:14:37.417312  295274 kubeadm.go:318] 
	I1019 12:14:37.417400  295274 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ye03ax.m6ox9zrlec6c94l4 \
	I1019 12:14:37.417508  295274 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0ee0bbb0fbfe8419c71683408bd38502dbf921f3cb179cb0365daeb92f967309 \
	I1019 12:14:37.417529  295274 kubeadm.go:318] 	--control-plane 
	I1019 12:14:37.417533  295274 kubeadm.go:318] 
	I1019 12:14:37.417621  295274 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 12:14:37.417625  295274 kubeadm.go:318] 
	I1019 12:14:37.417762  295274 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ye03ax.m6ox9zrlec6c94l4 \
	I1019 12:14:37.417889  295274 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0ee0bbb0fbfe8419c71683408bd38502dbf921f3cb179cb0365daeb92f967309 
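The --discovery-token-ca-cert-hash in both join commands is the SHA-256 of the cluster CA's public key; joining nodes use it to pin the CA before trusting anything it signs. It can be recomputed with the standard kubeadm recipe, pointed at minikube's certificate directory rather than the default /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex \
        | sed 's/^.* //'
    # should print the hash from the log:
    # 0ee0bbb0fbfe8419c71683408bd38502dbf921f3cb179cb0365daeb92f967309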
	I1019 12:14:37.417900  295274 cni.go:84] Creating CNI manager for ""
	I1019 12:14:37.417908  295274 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:14:37.420883  295274 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 12:14:37.423738  295274 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 12:14:37.428479  295274 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 12:14:37.428557  295274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 12:14:37.442627  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 12:14:37.733181  295274 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:14:37.733266  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:37.733352  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-694780 minikube.k8s.io/updated_at=2025_10_19T12_14_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=addons-694780 minikube.k8s.io/primary=true
	I1019 12:14:37.865478  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:37.865544  295274 ops.go:34] apiserver oom_adj: -16
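The -16 recorded here is read straight from procfs by the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run above: on the legacy -17..15 scale, a strongly negative value tells the kernel OOM killer to spare the apiserver. The same check, alongside the modern interface:

    pid=$(pgrep kube-apiserver)
    cat "/proc/${pid}/oom_adj"        # legacy knob, range -17..15 (-17 disables OOM-kill)
    cat "/proc/${pid}/oom_score_adj"  # current knob, range -1000..1000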
	I1019 12:14:38.365656  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:38.865627  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:39.366041  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:39.866521  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:40.366046  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:40.866435  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:41.366263  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:41.865749  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:42.046716  295274 kubeadm.go:1113] duration metric: took 4.313504373s to wait for elevateKubeSystemPrivileges
	I1019 12:14:42.046825  295274 kubeadm.go:402] duration metric: took 22.647794039s to StartCluster
	I1019 12:14:42.046884  295274 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:42.047667  295274 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 12:14:42.048191  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:42.049162  295274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 12:14:42.049503  295274 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:14:42.049655  295274 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1019 12:14:42.049753  295274 addons.go:69] Setting yakd=true in profile "addons-694780"
	I1019 12:14:42.049767  295274 addons.go:238] Setting addon yakd=true in "addons-694780"
	I1019 12:14:42.049790  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.050262  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.049616  295274 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:14:42.050832  295274 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-694780"
	I1019 12:14:42.050848  295274 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-694780"
	I1019 12:14:42.050872  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.051290  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.054229  295274 addons.go:69] Setting cloud-spanner=true in profile "addons-694780"
	I1019 12:14:42.054318  295274 addons.go:238] Setting addon cloud-spanner=true in "addons-694780"
	I1019 12:14:42.054409  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.055024  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.055671  295274 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-694780"
	I1019 12:14:42.055716  295274 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-694780"
	I1019 12:14:42.055752  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.056172  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.057603  295274 out.go:179] * Verifying Kubernetes components...
	I1019 12:14:42.057948  295274 addons.go:69] Setting storage-provisioner=true in profile "addons-694780"
	I1019 12:14:42.057976  295274 addons.go:238] Setting addon storage-provisioner=true in "addons-694780"
	I1019 12:14:42.058019  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.058463  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.068714  295274 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-694780"
	I1019 12:14:42.069042  295274 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-694780"
	I1019 12:14:42.068903  295274 addons.go:69] Setting volcano=true in profile "addons-694780"
	I1019 12:14:42.069314  295274 addons.go:238] Setting addon volcano=true in "addons-694780"
	I1019 12:14:42.069358  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.069885  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.081649  295274 addons.go:69] Setting default-storageclass=true in profile "addons-694780"
	I1019 12:14:42.081761  295274 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-694780"
	I1019 12:14:42.082176  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.068919  295274 addons.go:69] Setting volumesnapshots=true in profile "addons-694780"
	I1019 12:14:42.083054  295274 addons.go:238] Setting addon volumesnapshots=true in "addons-694780"
	I1019 12:14:42.086299  295274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:14:42.087238  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.087560  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.094842  295274 addons.go:69] Setting gcp-auth=true in profile "addons-694780"
	I1019 12:14:42.095131  295274 mustload.go:65] Loading cluster: addons-694780
	I1019 12:14:42.095673  295274 addons.go:69] Setting ingress=true in profile "addons-694780"
	I1019 12:14:42.095706  295274 addons.go:238] Setting addon ingress=true in "addons-694780"
	I1019 12:14:42.095751  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.096232  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.104439  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.123857  295274 addons.go:69] Setting ingress-dns=true in profile "addons-694780"
	I1019 12:14:42.123899  295274 addons.go:238] Setting addon ingress-dns=true in "addons-694780"
	I1019 12:14:42.123947  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.124445  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.128186  295274 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:14:42.128585  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.143647  295274 addons.go:69] Setting inspektor-gadget=true in profile "addons-694780"
	I1019 12:14:42.143694  295274 addons.go:238] Setting addon inspektor-gadget=true in "addons-694780"
	I1019 12:14:42.143734  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.144229  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.172000  295274 addons.go:69] Setting metrics-server=true in profile "addons-694780"
	I1019 12:14:42.172033  295274 addons.go:238] Setting addon metrics-server=true in "addons-694780"
	I1019 12:14:42.172081  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.172616  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.211571  295274 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-694780"
	I1019 12:14:42.211604  295274 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-694780"
	I1019 12:14:42.211648  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.212425  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.234780  295274 addons.go:69] Setting registry=true in profile "addons-694780"
	I1019 12:14:42.234824  295274 addons.go:238] Setting addon registry=true in "addons-694780"
	I1019 12:14:42.234865  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.235384  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.257240  295274 addons.go:69] Setting registry-creds=true in profile "addons-694780"
	I1019 12:14:42.257271  295274 addons.go:238] Setting addon registry-creds=true in "addons-694780"
	I1019 12:14:42.257321  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.257910  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.304452  295274 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1019 12:14:42.310048  295274 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 12:14:42.310075  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1019 12:14:42.310142  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
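This Go-template inspect (repeated below for each addon's SSH session) is how minikube discovers which host port Docker published for the node's sshd; the sshutil lines further down show the answer, 33138. Run by hand:

    docker container inspect \
        -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' \
        addons-694780
    # 33138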
	I1019 12:14:42.327149  295274 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1019 12:14:42.327273  295274 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	W1019 12:14:42.349904  295274 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1019 12:14:42.356888  295274 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1019 12:14:42.356913  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1019 12:14:42.356978  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.373431  295274 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:14:42.373550  295274 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1019 12:14:42.377344  295274 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:14:42.377489  295274 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1019 12:14:42.377504  295274 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1019 12:14:42.377570  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.380304  295274 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 12:14:42.380326  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1019 12:14:42.380388  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.390018  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1019 12:14:42.424298  295274 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:14:42.434986  295274 out.go:179]   - Using image docker.io/registry:3.0.0
	I1019 12:14:42.435118  295274 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:14:42.435130  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:14:42.435228  295274 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1019 12:14:42.435247  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.450638  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1019 12:14:42.450764  295274 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1019 12:14:42.451023  295274 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 12:14:42.451062  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1019 12:14:42.451158  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.465959  295274 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1019 12:14:42.466030  295274 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1019 12:14:42.466126  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.480125  295274 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-694780"
	I1019 12:14:42.480179  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.480602  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.495819  295274 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1019 12:14:42.496931  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1019 12:14:42.497039  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.504172  295274 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1019 12:14:42.513109  295274 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1019 12:14:42.513138  295274 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1019 12:14:42.513210  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.553875  295274 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1019 12:14:42.554085  295274 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1019 12:14:42.554130  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1019 12:14:42.554241  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.554636  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.554666  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.555560  295274 addons.go:238] Setting addon default-storageclass=true in "addons-694780"
	I1019 12:14:42.561866  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.562066  295274 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1019 12:14:42.562376  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.562655  295274 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1019 12:14:42.562670  295274 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1019 12:14:42.562725  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.555762  295274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 12:14:42.582623  295274 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 12:14:42.582643  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1019 12:14:42.582701  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.583213  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1019 12:14:42.586736  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1019 12:14:42.594936  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1019 12:14:42.596347  295274 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 12:14:42.596364  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1019 12:14:42.596433  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.613346  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.614178  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1019 12:14:42.617594  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1019 12:14:42.624156  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1019 12:14:42.628362  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1019 12:14:42.628390  295274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1019 12:14:42.628458  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.637110  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.650493  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.666049  295274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:14:42.691835  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.708777  295274 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1019 12:14:42.715731  295274 out.go:179]   - Using image docker.io/busybox:stable
	I1019 12:14:42.718594  295274 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 12:14:42.718612  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1019 12:14:42.718685  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.734783  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.762208  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.768081  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.772404  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.776636  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	W1019 12:14:42.780525  295274 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 12:14:42.780559  295274 retry.go:31] will retry after 142.015581ms: ssh: handshake failed: EOF
	I1019 12:14:42.791076  295274 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:14:42.791097  295274 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:14:42.791157  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.807344  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.832094  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.838779  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.855571  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	W1019 12:14:42.870667  295274 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 12:14:42.870699  295274 retry.go:31] will retry after 242.668209ms: ssh: handshake failed: EOF
	W1019 12:14:42.923492  295274 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 12:14:42.923519  295274 retry.go:31] will retry after 511.228858ms: ssh: handshake failed: EOF
	I1019 12:14:43.125434  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 12:14:43.310785  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:14:43.315491  295274 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1019 12:14:43.315516  295274 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1019 12:14:43.397009  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 12:14:43.459067  295274 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1019 12:14:43.459139  295274 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1019 12:14:43.461552  295274 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:43.461613  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1019 12:14:43.463547  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1019 12:14:43.484201  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 12:14:43.495509  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 12:14:43.497947  295274 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1019 12:14:43.498023  295274 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1019 12:14:43.531347  295274 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1019 12:14:43.531436  295274 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1019 12:14:43.588287  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:14:43.596328  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1019 12:14:43.596395  295274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1019 12:14:43.598543  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 12:14:43.600233  295274 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1019 12:14:43.600289  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1019 12:14:43.637199  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:43.653811  295274 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1019 12:14:43.653886  295274 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1019 12:14:43.699271  295274 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1019 12:14:43.699344  295274 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1019 12:14:43.714852  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 12:14:43.730720  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1019 12:14:43.730794  295274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1019 12:14:43.750100  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1019 12:14:43.816953  295274 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.237037083s)
	I1019 12:14:43.816978  295274 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
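
For context, the pipeline completed above rewrites CoreDNS's Corefile by inserting a hosts plugin block (mapping host.minikube.internal to the gateway IP) just before the forward plugin, so the custom record resolves locally and everything else falls through. A minimal Go sketch of that string edit, illustrative only and not minikube's actual code:

// corefile_inject.go - a sketch of the Corefile edit the sed pipeline performs.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Place the hosts plugin before the forward plugin so the custom
		// record wins and all other queries fall through to resolv.conf.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}
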
	I1019 12:14:43.817940  295274 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.151870059s)
	I1019 12:14:43.818559  295274 node_ready.go:35] waiting up to 6m0s for node "addons-694780" to be "Ready" ...
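
The 6m0s wait here is a poll on the node's Ready condition. A hedged client-go sketch of an equivalent loop (the kubeconfig path and node name are taken from the log; the polling interval and structure are assumptions, not node_ready.go itself):

// node_wait.go - poll a node until its Ready condition is True, with a deadline.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // retry until Ready or deadline
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "addons-694780", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
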
	I1019 12:14:43.827978  295274 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1019 12:14:43.828044  295274 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1019 12:14:43.944118  295274 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1019 12:14:43.944184  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1019 12:14:43.979810  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1019 12:14:43.979874  295274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1019 12:14:44.081228  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1019 12:14:44.081294  295274 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1019 12:14:44.147101  295274 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1019 12:14:44.147173  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1019 12:14:44.149652  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1019 12:14:44.159044  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1019 12:14:44.159110  295274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1019 12:14:44.228229  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.102748298s)
	I1019 12:14:44.246408  295274 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 12:14:44.246493  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1019 12:14:44.322723  295274 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-694780" context rescaled to 1 replicas
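
The rescale step pins coredns to a single replica via the Deployment's scale subresource, which is all a single-node cluster needs. A sketch of the equivalent client-go calls (an assumed equivalent, not the kapi.go implementation):

// rescale.go - set the coredns deployment to 1 replica via the scale subresource.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1 // one coredns pod suffices on a single node
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
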
	I1019 12:14:44.332453  295274 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1019 12:14:44.332518  295274 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1019 12:14:44.336837  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1019 12:14:44.336910  295274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1019 12:14:44.354259  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 12:14:44.505302  295274 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1019 12:14:44.505382  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1019 12:14:44.517502  295274 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 12:14:44.517569  295274 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1019 12:14:44.676881  295274 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1019 12:14:44.676908  295274 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1019 12:14:44.687278  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 12:14:44.830409  295274 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1019 12:14:44.830439  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1019 12:14:45.178510  295274 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1019 12:14:45.178540  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1019 12:14:45.401247  295274 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 12:14:45.401278  295274 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1019 12:14:45.695397  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1019 12:14:45.866520  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:46.658762  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.261721725s)
	I1019 12:14:46.658825  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.195215431s)
	I1019 12:14:46.658866  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.174595059s)
	I1019 12:14:46.658960  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.348135455s)
	I1019 12:14:48.322222  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.826630978s)
	I1019 12:14:48.322258  295274 addons.go:479] Verifying addon ingress=true in "addons-694780"
	I1019 12:14:48.322423  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.734060459s)
	I1019 12:14:48.322666  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.72406112s)
	I1019 12:14:48.322777  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.685505354s)
	W1019 12:14:48.322798  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:48.322812  295274 retry.go:31] will retry after 194.823768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
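
The recurring failure above is kubectl's client-side validation: every YAML document in a manifest must declare apiVersion and kind, and one document in ig-crd.yaml evidently does not, so the apply exits 1 even though the other resources are created. A small Go sketch of that check (the file name is reused from the log; the decoder choice is an assumption, any multi-document YAML decoder works):

// manifest_lint.go - flag YAML documents that omit apiVersion or kind.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("ig-crd.yaml") // local copy of the addon manifest, for illustration
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if doc == nil {
			continue // empty document between "---" separators
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d: apiVersion or kind not set\n", i)
		}
	}
}
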
	I1019 12:14:48.322866  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.607949186s)
	I1019 12:14:48.323006  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.572834224s)
	I1019 12:14:48.323023  295274 addons.go:479] Verifying addon registry=true in "addons-694780"
	I1019 12:14:48.323498  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.173752492s)
	I1019 12:14:48.323663  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.96932715s)
	W1019 12:14:48.324592  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1019 12:14:48.324623  295274 retry.go:31] will retry after 237.421838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
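
Unlike the ig-crd.yaml case, this failure is an ordering problem rather than a malformed manifest: the VolumeSnapshotClass is created in the same apply that registers its CRD, before the API server has established the new kind. One fix is to wait for the CRD's Established condition between the two steps; a hedged sketch of that wait (an assumed approach, minikube instead just retries the whole apply):

// crd_wait.go - wait for a CRD to report Established before creating its CRs.
package main

import (
	"context"
	"fmt"
	"time"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func waitEstablished(c *apiextclient.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
					return nil // safe to create resources of this kind now
				}
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("CRD %s not established after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	c := apiextclient.NewForConfigOrDie(cfg)
	if err := waitEstablished(c, "volumesnapshotclasses.snapshot.storage.k8s.io", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("CRD established")
}
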
	I1019 12:14:48.323753  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.636441908s)
	I1019 12:14:48.324644  295274 addons.go:479] Verifying addon metrics-server=true in "addons-694780"
	I1019 12:14:48.325525  295274 out.go:179] * Verifying ingress addon...
	I1019 12:14:48.327694  295274 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-694780 service yakd-dashboard -n yakd-dashboard
	
	I1019 12:14:48.327731  295274 out.go:179] * Verifying registry addon...
	W1019 12:14:48.329194  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:48.330875  295274 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1019 12:14:48.336333  295274 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1019 12:14:48.347942  295274 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 12:14:48.347969  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:48.348112  295274 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1019 12:14:48.348126  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:14:48.358956  295274 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
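
The default-storageclass error is an optimistic-concurrency conflict: the StorageClass's resourceVersion changed between the read and the write. The usual remedy is client-go's RetryOnConflict, which re-reads and re-applies the change on each conflict; a sketch (the annotation key is the standard default-class marker, the surrounding code is assumed):

// sc_default.go - re-apply the default-class annotation under RetryOnConflict.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func markNonDefault(cs *kubernetes.Clientset, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		// Update fails with a Conflict when resourceVersion is stale;
		// RetryOnConflict then re-runs this closure with a fresh Get.
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	if err := markNonDefault(kubernetes.NewForConfigOrDie(cfg), "local-path"); err != nil {
		panic(err)
	}
}
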
	I1019 12:14:48.518591  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:48.562921  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 12:14:48.601822  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.906377582s)
	I1019 12:14:48.601854  295274 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-694780"
	I1019 12:14:48.605106  295274 out.go:179] * Verifying csi-hostpath-driver addon...
	I1019 12:14:48.607924  295274 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1019 12:14:48.622940  295274 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 12:14:48.622967  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:48.834337  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:48.839004  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:49.117097  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:49.342167  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:49.343826  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:49.611923  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:49.629810  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.111127425s)
	W1019 12:14:49.629890  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:49.629916  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.066955932s)
	I1019 12:14:49.629924  295274 retry.go:31] will retry after 448.155711ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:14:49.834440  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:49.839467  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:50.079158  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:50.111958  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:50.171043  295274 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1019 12:14:50.171196  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:50.194357  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
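
Port 33138 here is the host port Docker mapped to the container's 22/tcp; the inspect template in the cli_runner line above extracts it. A sketch wrapping that same command (the Go wrapper is illustrative, the docker invocation is copied from the log):

// ssh_port.go - ask Docker which host port maps to the container's sshd.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	format := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	// The template wraps the value in single quotes; strip them and the newline.
	return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
}

func main() {
	port, err := sshHostPort("addons-694780")
	if err != nil {
		panic(err)
	}
	fmt.Printf("ssh docker@127.0.0.1 -p %s\n", port) // e.g. 33138 in this run
}
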
	I1019 12:14:50.335076  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:50.339667  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:50.347272  295274 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1019 12:14:50.360621  295274 addons.go:238] Setting addon gcp-auth=true in "addons-694780"
	I1019 12:14:50.360673  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:50.361113  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:50.399436  295274 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1019 12:14:50.399494  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:50.422224  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:50.611999  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:50.821363  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:50.835258  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:50.839415  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 12:14:50.961482  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:50.961566  295274 retry.go:31] will retry after 562.795912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:14:50.965219  295274 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:14:50.968201  295274 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1019 12:14:50.971165  295274 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1019 12:14:50.971188  295274 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1019 12:14:50.985512  295274 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1019 12:14:50.985538  295274 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1019 12:14:50.998282  295274 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 12:14:50.998310  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1019 12:14:51.013425  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 12:14:51.111969  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:51.338527  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:51.340021  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:51.497723  295274 addons.go:479] Verifying addon gcp-auth=true in "addons-694780"
	I1019 12:14:51.502848  295274 out.go:179] * Verifying gcp-auth addon...
	I1019 12:14:51.506527  295274 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1019 12:14:51.514789  295274 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1019 12:14:51.514859  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:51.524977  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:51.611173  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:51.834593  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:51.839147  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:52.010555  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:52.111504  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:52.306240  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:52.306275  295274 retry.go:31] will retry after 524.797045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:14:52.333921  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:52.339317  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:52.509761  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:52.612075  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:52.823142  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:52.831466  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:52.835433  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:52.839333  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:53.009565  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:53.111424  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:53.334733  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:53.339441  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:53.510702  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:53.611605  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:53.659425  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:53.659460  295274 retry.go:31] will retry after 1.836989408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:14:53.834283  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:53.839853  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:54.010116  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:54.110937  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:54.334075  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:54.339825  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:54.509673  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:54.611831  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:54.834681  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:54.839492  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:55.010863  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:55.111517  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:55.321578  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:55.334921  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:55.339419  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:55.496920  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:55.510082  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:55.611537  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:55.836283  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:55.839769  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:56.016846  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:56.111318  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:56.300093  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:56.300129  295274 retry.go:31] will retry after 1.362357652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:14:56.335930  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:56.345880  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:56.510118  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:56.611253  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:56.835231  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:56.839841  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:57.010173  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:57.110836  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:57.323574  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:57.334934  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:57.339528  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:57.509535  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:57.611453  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:57.662711  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:57.834131  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:57.839700  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:58.010394  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:58.111507  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:58.334777  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:58.339543  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 12:14:58.484484  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:58.484514  295274 retry.go:31] will retry after 2.965888162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:14:58.509364  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:58.611405  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:58.834448  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:58.839295  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:59.010258  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:59.111591  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:59.333744  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:59.339693  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:59.509279  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:59.611044  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:59.822164  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:59.834075  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:59.839918  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:00.014436  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:00.137612  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:00.346105  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:00.346184  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:00.512182  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:00.612107  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:00.835582  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:00.840655  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:01.010251  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:01.113264  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:01.334837  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:01.340141  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:01.451502  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:15:01.511042  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:01.611694  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:01.822206  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:01.834513  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:01.839504  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:02.010464  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:02.112347  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:02.272774  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:15:02.272806  295274 retry.go:31] will retry after 5.185809845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
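
Across attempts the retry delays grow from roughly 200ms toward several seconds, with visible jitter (194ms, 237ms, 448ms, ..., 5.18s, 6.01s). A sketch of that exponential-backoff-with-jitter pattern (base, cap behavior, and jitter fraction are guesses, not retry.go's actual parameters):

// backoff.go - retry a failing operation, doubling the wait and adding jitter.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	wait := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(wait) / 2)) // up to +50% of the base wait
		fmt.Printf("will retry after %s: %v\n", wait+jitter, err)
		time.Sleep(wait + jitter)
		wait *= 2 // exponential growth between attempts
	}
	return err
}

func main() {
	i := 0
	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
		i++
		if i < 4 {
			return fmt.Errorf("apply failed (attempt %d)", i)
		}
		return nil
	})
}
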
	I1019 12:15:02.335204  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:02.340165  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:02.510448  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:02.611397  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:02.834905  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:02.839914  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:03.009854  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:03.110900  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:03.334543  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:03.339352  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:03.510544  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:03.612023  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:03.834695  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:03.839403  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:04.011189  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:04.110972  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:04.322178  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:04.334394  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:04.339846  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:04.509808  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:04.611615  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:04.834767  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:04.839650  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:05.009751  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:05.111997  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:05.334254  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:05.340131  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:05.510105  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:05.612166  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:05.834315  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:05.839111  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:06.010405  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:06.111489  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:06.322749  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:06.334602  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:06.339455  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:06.510321  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:06.611003  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:06.834746  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:06.839227  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:07.009494  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:07.111206  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:07.334116  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:07.340085  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:07.459331  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:15:07.510035  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:07.611686  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:07.834914  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:07.839475  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:08.010197  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:08.111747  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:08.253798  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:15:08.253829  295274 retry.go:31] will retry after 6.015658051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
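
Annotation: this block is the root cause of the InspektorGadget addon failures in this report. kubectl's client-side validation rejects ig-crd.yaml because a document in it has no apiVersion or kind set, while every object in ig-deployment.yaml applies cleanly (all reported unchanged/configured), so the combined apply exits 1 and minikube schedules a retry. A standalone pre-check for that condition could look like the Go sketch below; the typeMeta struct and the use of gopkg.in/yaml.v3 are illustrative choices of mine, not minikube code.

package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// typeMeta mirrors the two fields kubectl's client-side validation
// requires on every object in a manifest.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	data, err := os.ReadFile(os.Args[1]) // e.g. ig-crd.yaml
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for i := 1; ; i++ {
		var tm typeMeta
		if err := dec.Decode(&tm); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the stream
			}
			fmt.Fprintf(os.Stderr, "document %d: not valid YAML: %v\n", i, err)
			os.Exit(1)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			// The condition kubectl reports as "apiVersion not set, kind not set".
			fmt.Printf("document %d: apiVersion and/or kind not set\n", i)
			continue
		}
		fmt.Printf("document %d: %s/%s\n", i, tm.APIVersion, tm.Kind)
	}
}
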
	W1019 12:15:08.323676  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:08.338520  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:08.339961  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:08.513176  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:08.610999  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:08.834948  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:08.839483  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:09.009598  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:09.111374  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:09.334599  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:09.339052  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:09.510174  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:09.611196  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:09.833973  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:09.839451  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:10.010657  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:10.111848  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:10.334616  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:10.339090  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:10.510016  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:10.610873  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:10.821752  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:10.835026  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:10.839602  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:11.010023  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:11.112154  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:11.334837  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:11.339761  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:11.509646  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:11.611417  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:11.833853  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:11.839625  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:12.010774  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:12.111823  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:12.334724  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:12.338999  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:12.510114  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:12.611231  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:12.822184  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:12.834394  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:12.839964  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:13.010051  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:13.110954  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:13.333975  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:13.339892  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:13.510124  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:13.610802  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:13.834692  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:13.839597  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:14.010028  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:14.112310  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:14.270605  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:15:14.334742  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:14.339714  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:14.509884  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:14.611890  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:14.822591  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:14.835698  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:14.839454  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:15.012928  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:15:15.081484  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:15:15.081520  295274 retry.go:31] will retry after 5.916791874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
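
Annotation: the retry.go:31 lines show minikube's generic apply-retry wrapper re-running the failed kubectl apply after a randomized delay (6.0s, then 5.9s, then 9.3s later in this log). A minimal sketch of that retry-with-jitter pattern, assuming a fixed base delay plus random jitter rather than minikube's exact backoff policy:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfterJitter runs fn and, on failure, sleeps for the base delay plus
// up to 50% random jitter before the next attempt, up to maxAttempts tries.
func retryAfterJitter(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base + time.Duration(rand.Int63n(int64(base)/2))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return fmt.Errorf("after %d attempts: %w", maxAttempts, err)
}

func main() {
	calls := 0
	err := retryAfterJitter(5, 6*time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("kubectl apply: exit status 1") // fails twice, then succeeds
		}
		return nil
	})
	fmt.Println("result:", err)
}
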
	I1019 12:15:15.112059  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:15.333631  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:15.339577  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:15.509913  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:15.610874  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:15.833785  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:15.839141  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:16.010236  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:16.111511  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:16.339018  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:16.340386  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:16.509283  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:16.611216  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:16.834127  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:16.839726  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:17.009879  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:17.111951  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:17.321838  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:17.334692  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:17.339176  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:17.510153  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:17.611120  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:17.833814  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:17.839080  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:18.011212  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:18.111082  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:18.334085  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:18.340533  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:18.509531  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:18.611575  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:18.834025  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:18.839604  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:19.009429  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:19.111407  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:19.322232  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:19.333928  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:19.339329  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:19.509502  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:19.611264  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:19.833825  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:19.839271  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:20.009919  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:20.111843  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:20.334651  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:20.339136  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:20.509971  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:20.610739  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:20.835134  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:20.839798  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:20.999297  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:15:21.010610  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:21.112424  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:21.335235  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:21.340651  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:21.510188  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:21.611983  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:21.826926  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	W1019 12:15:21.832642  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:15:21.832712  295274 retry.go:31] will retry after 9.288094305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:15:21.835001  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:21.839909  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:22.010506  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:22.111224  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:22.334358  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:22.339893  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:22.510141  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:22.611009  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:22.834468  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:22.839869  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:23.033371  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:23.117072  295274 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 12:15:23.117092  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:23.378865  295274 node_ready.go:49] node "addons-694780" is "Ready"
	I1019 12:15:23.378892  295274 node_ready.go:38] duration metric: took 39.560317036s for node "addons-694780" to be "Ready" ...
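
Annotation: here the node_ready.go poll finally succeeds, with the kubelet posting Ready=True after roughly 39.6 seconds of the "will retry" warnings above. A one-shot version of the same check with client-go, reusing the kubeconfig path and node name from this log (a sketch, not the harness's code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the test harness uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "addons-694780", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A node counts as "Ready" when its NodeReady condition is True.
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s\n", node.Name, cond.Status)
		}
	}
}
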
	I1019 12:15:23.378908  295274 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:15:23.378966  295274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:15:23.384532  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:23.404228  295274 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 12:15:23.404253  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:23.409924  295274 api_server.go:72] duration metric: took 41.359450098s to wait for apiserver process to appear ...
	I1019 12:15:23.409950  295274 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:15:23.409971  295274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1019 12:15:23.423321  295274 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1019 12:15:23.424534  295274 api_server.go:141] control plane version: v1.34.1
	I1019 12:15:23.424561  295274 api_server.go:131] duration metric: took 14.603146ms to wait for apiserver health ...
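
Annotation: the healthz probe is a plain HTTPS GET against the apiserver endpoint shown above; a 200 response with body "ok" ends the wait. Roughly, assuming this sketch skips certificate verification instead of loading minikube's cluster CA as the real client does:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify is a shortcut for this sketch only; the real check
	// trusts the cluster CA rather than disabling TLS verification.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
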
	I1019 12:15:23.424570  295274 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:15:23.441169  295274 system_pods.go:59] 19 kube-system pods found
	I1019 12:15:23.441213  295274 system_pods.go:61] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:15:23.441223  295274 system_pods.go:61] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:23.441232  295274 system_pods.go:61] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:23.441237  295274 system_pods.go:61] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending
	I1019 12:15:23.441244  295274 system_pods.go:61] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:23.441249  295274 system_pods.go:61] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:23.441259  295274 system_pods.go:61] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:23.441264  295274 system_pods.go:61] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:23.441275  295274 system_pods.go:61] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:23.441288  295274 system_pods.go:61] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:23.441293  295274 system_pods.go:61] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:23.441298  295274 system_pods.go:61] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending
	I1019 12:15:23.441303  295274 system_pods.go:61] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending
	I1019 12:15:23.441315  295274 system_pods.go:61] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:23.441321  295274 system_pods.go:61] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:23.441335  295274 system_pods.go:61] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending
	I1019 12:15:23.441343  295274 system_pods.go:61] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:23.441350  295274 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:23.441359  295274 system_pods.go:61] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:15:23.441367  295274 system_pods.go:74] duration metric: took 16.79045ms to wait for pod list to return data ...
	I1019 12:15:23.441383  295274 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:15:23.445300  295274 default_sa.go:45] found service account: "default"
	I1019 12:15:23.445327  295274 default_sa.go:55] duration metric: took 3.938468ms for default service account to be created ...
	I1019 12:15:23.445338  295274 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:15:23.460218  295274 system_pods.go:86] 19 kube-system pods found
	I1019 12:15:23.460257  295274 system_pods.go:89] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:15:23.460267  295274 system_pods.go:89] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:23.460276  295274 system_pods.go:89] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:23.460280  295274 system_pods.go:89] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending
	I1019 12:15:23.460285  295274 system_pods.go:89] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:23.460290  295274 system_pods.go:89] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:23.460299  295274 system_pods.go:89] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:23.460305  295274 system_pods.go:89] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:23.460314  295274 system_pods.go:89] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:23.460319  295274 system_pods.go:89] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:23.460329  295274 system_pods.go:89] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:23.460333  295274 system_pods.go:89] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending
	I1019 12:15:23.460337  295274 system_pods.go:89] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending
	I1019 12:15:23.460343  295274 system_pods.go:89] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:23.460353  295274 system_pods.go:89] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:23.460358  295274 system_pods.go:89] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending
	I1019 12:15:23.460364  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:23.460375  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:23.460381  295274 system_pods.go:89] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:15:23.460397  295274 retry.go:31] will retry after 255.328592ms: missing components: kube-dns
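
Annotation: the pod inventory is otherwise acceptable; the retry loop here is gated solely on the missing kube-dns component, i.e. the coredns pod that is still Pending. A direct check for that component with client-go, assuming the upstream CoreDNS label k8s-app=kube-dns:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// CoreDNS pods carry the upstream label k8s-app=kube-dns.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	running := false
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
		if p.Status.Phase == corev1.PodRunning {
			running = true
		}
	}
	fmt.Println("kube-dns running:", running) // the wait loop retries until this is true
}
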
	I1019 12:15:23.523688  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:23.617333  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:23.736116  295274 system_pods.go:86] 19 kube-system pods found
	I1019 12:15:23.736166  295274 system_pods.go:89] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:15:23.736175  295274 system_pods.go:89] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:23.736185  295274 system_pods.go:89] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:23.736193  295274 system_pods.go:89] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:15:23.736198  295274 system_pods.go:89] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:23.736203  295274 system_pods.go:89] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:23.736213  295274 system_pods.go:89] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:23.736218  295274 system_pods.go:89] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:23.736229  295274 system_pods.go:89] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:23.736233  295274 system_pods.go:89] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:23.736238  295274 system_pods.go:89] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:23.736244  295274 system_pods.go:89] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:15:23.736257  295274 system_pods.go:89] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:15:23.736266  295274 system_pods.go:89] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:23.736277  295274 system_pods.go:89] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:23.736283  295274 system_pods.go:89] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:15:23.736290  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:23.736297  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:23.736310  295274 system_pods.go:89] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:15:23.736330  295274 retry.go:31] will retry after 304.376177ms: missing components: kube-dns
	I1019 12:15:23.837198  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:23.937744  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:24.039000  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:24.048786  295274 system_pods.go:86] 19 kube-system pods found
	I1019 12:15:24.048827  295274 system_pods.go:89] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:15:24.048837  295274 system_pods.go:89] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:24.048844  295274 system_pods.go:89] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:24.048850  295274 system_pods.go:89] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:15:24.048855  295274 system_pods.go:89] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:24.048861  295274 system_pods.go:89] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:24.048869  295274 system_pods.go:89] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:24.048874  295274 system_pods.go:89] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:24.048886  295274 system_pods.go:89] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:24.048890  295274 system_pods.go:89] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:24.048895  295274 system_pods.go:89] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:24.048901  295274 system_pods.go:89] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:15:24.048912  295274 system_pods.go:89] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:15:24.048920  295274 system_pods.go:89] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:24.048938  295274 system_pods.go:89] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:24.048944  295274 system_pods.go:89] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:15:24.048951  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:24.048960  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:24.048966  295274 system_pods.go:89] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:15:24.048988  295274 retry.go:31] will retry after 401.197866ms: missing components: kube-dns
	I1019 12:15:24.140406  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:24.335503  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:24.339440  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:24.456162  295274 system_pods.go:86] 19 kube-system pods found
	I1019 12:15:24.456197  295274 system_pods.go:89] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:15:24.456206  295274 system_pods.go:89] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:24.456214  295274 system_pods.go:89] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:24.456220  295274 system_pods.go:89] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:15:24.456224  295274 system_pods.go:89] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:24.456230  295274 system_pods.go:89] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:24.456235  295274 system_pods.go:89] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:24.456244  295274 system_pods.go:89] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:24.456437  295274 system_pods.go:89] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:24.456454  295274 system_pods.go:89] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:24.456463  295274 system_pods.go:89] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:24.456471  295274 system_pods.go:89] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:15:24.456484  295274 system_pods.go:89] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:15:24.456492  295274 system_pods.go:89] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:24.456499  295274 system_pods.go:89] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:24.456508  295274 system_pods.go:89] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:15:24.456516  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:24.456526  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:24.456533  295274 system_pods.go:89] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:15:24.456563  295274 retry.go:31] will retry after 379.97275ms: missing components: kube-dns
	I1019 12:15:24.509665  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:24.611932  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:24.835072  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:24.842364  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:24.842698  295274 system_pods.go:86] 19 kube-system pods found
	I1019 12:15:24.842722  295274 system_pods.go:89] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:15:24.842730  295274 system_pods.go:89] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:24.842752  295274 system_pods.go:89] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:24.842759  295274 system_pods.go:89] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:15:24.842764  295274 system_pods.go:89] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:24.842770  295274 system_pods.go:89] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:24.842774  295274 system_pods.go:89] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:24.842782  295274 system_pods.go:89] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:24.842788  295274 system_pods.go:89] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:24.842794  295274 system_pods.go:89] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:24.842799  295274 system_pods.go:89] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:24.842805  295274 system_pods.go:89] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:15:24.842812  295274 system_pods.go:89] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:15:24.842819  295274 system_pods.go:89] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:24.842825  295274 system_pods.go:89] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:24.842832  295274 system_pods.go:89] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:15:24.842840  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:24.842848  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:24.842854  295274 system_pods.go:89] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:15:24.842871  295274 retry.go:31] will retry after 571.269725ms: missing components: kube-dns
	I1019 12:15:25.010624  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:25.112004  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:25.334437  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:25.345300  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:25.419287  295274 system_pods.go:86] 19 kube-system pods found
	I1019 12:15:25.419326  295274 system_pods.go:89] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Running
	I1019 12:15:25.419337  295274 system_pods.go:89] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:25.419346  295274 system_pods.go:89] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:25.419353  295274 system_pods.go:89] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:15:25.419359  295274 system_pods.go:89] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:25.419365  295274 system_pods.go:89] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:25.419369  295274 system_pods.go:89] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:25.419373  295274 system_pods.go:89] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:25.419387  295274 system_pods.go:89] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:25.419397  295274 system_pods.go:89] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:25.419402  295274 system_pods.go:89] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:25.419411  295274 system_pods.go:89] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:15:25.419421  295274 system_pods.go:89] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:15:25.419427  295274 system_pods.go:89] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:25.419434  295274 system_pods.go:89] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:25.419456  295274 system_pods.go:89] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:15:25.419463  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:25.419475  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:25.419479  295274 system_pods.go:89] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Running
	I1019 12:15:25.419493  295274 system_pods.go:126] duration metric: took 1.974147977s to wait for k8s-apps to be running ...
	I1019 12:15:25.419503  295274 system_svc.go:44] waiting for kubelet service to be running...
	I1019 12:15:25.419562  295274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:15:25.432842  295274 system_svc.go:56] duration metric: WaitForService took 13.329938ms to wait for kubelet
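
The kubelet check above reduces to a single systemd query: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, so the exit status alone answers the question. A minimal sketch of that probe (an illustration, not minikube's actual system_svc.go; the unit name is simplified to `kubelet`):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletRunning reports whether the kubelet systemd unit is active.
// With --quiet, systemctl prints nothing and signals the result purely
// through its exit code, which Run surfaces as a nil or non-nil error.
func kubeletRunning() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletRunning())
}
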
	I1019 12:15:25.432871  295274 kubeadm.go:586] duration metric: took 43.382402791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:15:25.432893  295274 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:15:25.435878  295274 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 12:15:25.435958  295274 node_conditions.go:123] node cpu capacity is 2
	I1019 12:15:25.435972  295274 node_conditions.go:105] duration metric: took 3.072988ms to run NodePressure ...
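
The NodePressure step reads the capacity figures (203034800Ki ephemeral storage, 2 CPUs) straight from each node's status. A hedged sketch of the equivalent client-go read, assuming a kubeconfig at the default location (this is not minikube's node_conditions.go, just the same lookup):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Status.Capacity is the source of the two log lines above.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
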
	I1019 12:15:25.435984  295274 start.go:241] waiting for startup goroutines ...
	I1019 12:15:25.509932  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:25.611637  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:25.834420  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:25.839728  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
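
The kapi.go:96 lines that dominate the rest of this log are one poll loop per addon label selector, each probing roughly every 500ms until the matching pod leaves Pending. A simplified sketch of that loop (an approximation of the pattern, not the actual minikube code; the namespace, selector, and timeout in main are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunning polls pods matching selector in ns until one reaches
// phase Running or the timeout expires, mirroring the ~500ms cadence of
// the "waiting for pod" lines in this log.
func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pods %q still not Running", selector)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 2*time.Minute))
}
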
	I1019 12:15:26.010605  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:26.112599  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:26.335217  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:26.340378  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:26.510563  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:26.611884  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:26.835238  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:26.840059  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:27.012072  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:27.112925  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:27.334258  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:27.340053  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:27.510561  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:27.611851  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:27.833874  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:27.839477  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:28.009250  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:28.111773  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:28.335025  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:28.339951  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:28.510124  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:28.611891  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:28.834479  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:28.839424  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:29.009531  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:29.111715  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:29.334400  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:29.339832  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:29.510742  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:29.612457  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:29.835752  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:29.840098  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:30.011480  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:30.112981  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:30.334769  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:30.340122  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:30.510423  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:30.612172  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:30.834721  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:30.839449  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:31.009792  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:31.112489  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:31.121125  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:15:31.334636  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:31.339529  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:31.509779  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:31.612475  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:31.838829  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:31.841701  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:32.010777  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:32.112109  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:32.150189  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.028968676s)
	W1019 12:15:32.150221  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:15:32.150241  295274 retry.go:31] will retry after 12.156396731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
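
The failure mode here is narrow: kubectl's client-side validation rejects any YAML document that lacks the `apiVersion` and `kind` type header. Since every other object in the apply reports "unchanged" or "configured", the offender is a single document inside ig-crd.yaml, commonly an empty document left by a stray `---` separator or a manifest written without its header; the exact cause is not visible in this log. A minimal sketch of the check being tripped (single-document case, using gopkg.in/yaml.v3, not kubectl's own validator):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// typeMeta captures the two fields kubectl's validation insists on.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

// validate mimics the failing check for one YAML document: both
// apiVersion and kind must be set.
func validate(doc []byte) error {
	var tm typeMeta
	if err := yaml.Unmarshal(doc, &tm); err != nil {
		return err
	}
	var missing []string
	if tm.APIVersion == "" {
		missing = append(missing, "apiVersion not set")
	}
	if tm.Kind == "" {
		missing = append(missing, "kind not set")
	}
	if len(missing) > 0 {
		return fmt.Errorf("error validating data: %v", missing)
	}
	return nil
}

func main() {
	// A document carrying only metadata produces the same complaint as above.
	fmt.Println(validate([]byte("metadata:\n  name: gadget\n")))
}

Passing --validate=false, as the error suggests, would only mask the broken document; retrying an unchanged file, as minikube does next, cannot succeed until the manifest itself is fixed.
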
	I1019 12:15:32.335890  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:32.343474  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:32.510071  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:32.611081  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:32.834594  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:32.839808  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:33.010581  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:33.111959  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:33.335207  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:33.340456  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:33.509940  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:33.612220  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:33.834697  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:33.839350  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:34.010654  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:34.112737  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:34.334511  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:34.339353  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:34.510260  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:34.611426  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:34.834899  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:34.839829  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:35.011614  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:35.112344  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:35.334460  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:35.348661  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:35.509841  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:35.612251  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:35.835671  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:35.839361  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:36.011491  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:36.112375  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:36.337856  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:36.341710  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:36.510019  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:36.611732  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:36.835571  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:36.840129  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:37.011071  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:37.111709  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:37.335071  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:37.340334  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:37.510593  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:37.612029  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:37.834468  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:37.839337  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:38.010949  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:38.111810  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:38.334228  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:38.340575  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:38.510264  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:38.611534  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:38.835369  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:38.839050  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:39.010546  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:39.111864  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:39.348966  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:39.349383  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:39.511027  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:39.611453  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:39.835182  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:39.840068  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:40.015734  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:40.111999  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:40.339474  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:40.340629  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:40.509732  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:40.612473  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:40.835108  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:40.840417  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:41.012483  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:41.111276  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:41.334625  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:41.343441  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:41.510687  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:41.612429  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:41.834846  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:41.839924  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:42.011700  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:42.114176  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:42.335966  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:42.340112  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:42.510750  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:42.613030  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:42.834639  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:42.839482  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:43.011021  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:43.112195  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:43.334724  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:43.339609  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:43.510818  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:43.611843  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:43.835564  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:43.839249  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:44.010535  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:44.112249  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:44.307694  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:15:44.334719  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:44.339771  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:44.510370  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:44.611881  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:44.835427  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:44.840133  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:45.010392  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:45.111871  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:45.341037  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:45.343670  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:45.414612  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.106822707s)
	W1019 12:15:45.414701  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:15:45.414736  295274 retry.go:31] will retry after 22.883577744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
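
Note the shape of the retry schedule: retry.go:31 re-runs the apply at growing, randomized intervals (12.16s here after the first failure, 22.88s after the second, 32.74s later in this log). A rough sketch of that jittered backoff pattern, under the assumption of roughly doubling delays with up to 100% jitter (not minikube's retry.go itself):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo re-runs op with a jittered, roughly doubling delay until it
// succeeds or the overall deadline passes, matching the irregular
// 12s/22s/32s intervals logged above.
func retryExpo(op func() error, initial, total time.Duration) error {
	deadline := time.Now().Add(total)
	delay := initial
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// Up to 100% jitter so concurrent retry loops don't synchronize.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	_ = retryExpo(func() error { return errors.New("apply failed") },
		100*time.Millisecond, time.Second)
}
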
	I1019 12:15:45.509866  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:45.612244  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:45.834781  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:45.839708  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:46.010497  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:46.112377  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:46.335483  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:46.342369  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:46.510770  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:46.612451  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:46.835044  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:46.840088  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:47.010670  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:47.112454  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:47.334841  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:47.339949  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:47.510319  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:47.612165  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:47.834145  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:47.840467  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:48.011383  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:48.111863  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:48.334283  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:48.339687  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:48.524545  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:48.611942  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:48.835225  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:48.843026  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:49.017738  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:49.118838  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:49.363825  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:49.364010  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:49.514378  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:49.614652  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:49.835411  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:49.839256  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:50.012650  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:50.112553  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:50.335597  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:50.339995  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:50.516350  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:50.611481  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:50.834543  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:50.839802  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:51.010420  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:51.112127  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:51.387961  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:51.388347  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:51.511581  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:51.612134  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:51.834600  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:51.839996  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:52.010709  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:52.112190  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:52.335100  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:52.340248  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:52.511029  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:52.611890  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:52.835348  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:52.839660  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:53.009887  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:53.111314  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:53.335115  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:53.340343  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:53.509998  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:53.611724  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:53.835958  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:53.840259  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:54.012632  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:54.151352  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:54.335646  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:54.340925  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:54.510636  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:54.612386  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:54.835029  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:54.840560  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:55.010572  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:55.112039  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:55.334215  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:55.340252  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:55.510202  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:55.611678  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:55.835403  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:55.839403  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:56.009563  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:56.111722  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:56.335303  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:56.338697  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:56.510014  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:56.611666  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:56.835017  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:56.840019  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:57.011252  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:57.111631  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:57.335119  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:57.340138  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:57.510539  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:57.611765  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:57.834963  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:57.839774  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:58.010060  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:58.111227  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:58.334467  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:58.339712  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:58.510419  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:58.612355  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:58.842645  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:58.842732  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:59.010152  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:59.113611  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:59.334580  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:59.339313  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:59.512453  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:59.612121  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:59.834602  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:59.839505  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:00.011576  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:00.126618  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:00.336448  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:00.348181  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:00.511138  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:00.611722  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:00.834937  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:00.839822  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:01.009999  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:01.111727  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:01.334337  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:01.340901  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:01.510279  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:01.611745  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:01.834708  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:01.839686  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:02.010881  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:02.111749  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:02.335272  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:02.340199  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:02.510509  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:02.612216  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:02.834560  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:02.839146  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:03.010520  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:03.111503  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:03.334713  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:03.339834  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:03.509634  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:03.611758  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:03.834558  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:03.838947  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:04.010349  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:04.111679  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:04.335066  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:04.340075  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:04.510611  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:04.612260  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:04.834669  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:04.839630  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:05.010184  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:05.111848  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:05.335276  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:05.338958  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:05.518404  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:05.611459  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:05.834936  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:05.839677  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:06.010896  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:06.112226  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:06.334654  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:06.340664  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:06.510120  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:06.614683  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:06.835597  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:06.839995  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:07.010136  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:07.112719  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:07.337776  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:07.347081  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:07.515756  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:07.632539  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:07.846506  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:07.847292  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:08.015674  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:08.116093  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:08.299199  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:16:08.340342  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:08.344472  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:08.516955  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:08.619581  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:08.835511  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:08.840333  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:09.012689  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:09.114188  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:09.335198  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:09.340266  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:09.431421  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.132126526s)
	W1019 12:16:09.431460  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:16:09.431498  295274 retry.go:31] will retry after 32.744760924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:16:09.514140  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:09.611601  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:09.834378  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:09.839190  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:10.019000  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:10.112234  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:10.334895  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:10.339303  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:10.511133  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:10.611602  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:10.834738  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:10.839557  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:11.010363  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:11.112626  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:11.335022  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:11.340828  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:11.510113  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:11.618621  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:11.835306  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:11.839241  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:12.009940  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:12.111188  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:12.335953  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:12.344104  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:12.510739  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:12.612189  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:12.834987  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:12.839718  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:13.009863  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:13.113230  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:13.335818  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:13.339354  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:13.509240  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:13.612238  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:13.835150  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:13.840258  295274 kapi.go:107] duration metric: took 1m25.503924007s to wait for kubernetes.io/minikube-addons=registry ...
	I1019 12:16:14.011359  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:14.112047  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:14.333916  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:14.510125  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:14.611595  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:14.834540  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:15.009724  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:15.112102  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:15.333938  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:15.509941  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:15.611663  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:15.834756  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:16.009712  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:16.111597  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:16.334495  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:16.509988  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:16.611003  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:16.834291  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:17.009624  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:17.111651  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:17.333740  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:17.509820  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:17.611433  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:17.835061  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:18.009878  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:18.111292  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:18.334620  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:18.511719  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:18.612518  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:18.834776  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:19.010030  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:19.111380  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:19.334578  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:19.510151  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:19.611689  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:19.834972  295274 kapi.go:107] duration metric: took 1m31.504099622s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1019 12:16:20.010224  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:20.111167  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:20.509856  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:20.731109  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:21.010476  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:21.111911  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:21.509794  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:21.612483  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:22.010763  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:22.112541  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:22.510278  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:22.611418  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:23.011585  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:23.117607  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:23.511057  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:23.621778  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:24.012497  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:24.112241  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:24.516433  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:24.612114  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:25.010705  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:25.111892  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:25.512418  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:25.611483  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:26.010117  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:26.111598  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:26.510190  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:26.616081  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:27.011003  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:27.111801  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:27.510203  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:27.611631  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:28.009483  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:28.116351  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:28.510523  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:28.612070  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:29.009654  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:29.112029  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:29.510761  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:29.613252  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:30.013796  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:30.113584  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:30.509632  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:30.613044  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:31.010528  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:31.112924  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:31.511497  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:31.622650  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:32.011140  295274 kapi.go:107] duration metric: took 1m40.504611017s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1019 12:16:32.062201  295274 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-694780 cluster.
	I1019 12:16:32.093979  295274 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1019 12:16:32.111474  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:32.151876  295274 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1019 12:16:32.612031  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:33.112224  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:33.615957  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:34.112405  295274 kapi.go:107] duration metric: took 1m45.504479047s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1019 12:16:42.177237  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1019 12:16:43.001174  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 12:16:43.001279  295274 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1019 12:16:43.005385  295274 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, cloud-spanner, registry-creds, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1019 12:16:43.008345  295274 addons.go:514] duration metric: took 2m0.958670498s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns cloud-spanner registry-creds storage-provisioner nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1019 12:16:43.008415  295274 start.go:246] waiting for cluster config update ...
	I1019 12:16:43.008444  295274 start.go:255] writing updated cluster config ...
	I1019 12:16:43.008777  295274 ssh_runner.go:195] Run: rm -f paused
	I1019 12:16:43.013485  295274 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:16:43.017923  295274 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pmnfn" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.024726  295274 pod_ready.go:94] pod "coredns-66bc5c9577-pmnfn" is "Ready"
	I1019 12:16:43.024757  295274 pod_ready.go:86] duration metric: took 6.756337ms for pod "coredns-66bc5c9577-pmnfn" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.027523  295274 pod_ready.go:83] waiting for pod "etcd-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.032691  295274 pod_ready.go:94] pod "etcd-addons-694780" is "Ready"
	I1019 12:16:43.032719  295274 pod_ready.go:86] duration metric: took 5.167491ms for pod "etcd-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.035194  295274 pod_ready.go:83] waiting for pod "kube-apiserver-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.040128  295274 pod_ready.go:94] pod "kube-apiserver-addons-694780" is "Ready"
	I1019 12:16:43.040159  295274 pod_ready.go:86] duration metric: took 4.938679ms for pod "kube-apiserver-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.042836  295274 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.417081  295274 pod_ready.go:94] pod "kube-controller-manager-addons-694780" is "Ready"
	I1019 12:16:43.417114  295274 pod_ready.go:86] duration metric: took 374.247577ms for pod "kube-controller-manager-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.619317  295274 pod_ready.go:83] waiting for pod "kube-proxy-g2s4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:44.017652  295274 pod_ready.go:94] pod "kube-proxy-g2s4z" is "Ready"
	I1019 12:16:44.017752  295274 pod_ready.go:86] duration metric: took 398.402857ms for pod "kube-proxy-g2s4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:44.218532  295274 pod_ready.go:83] waiting for pod "kube-scheduler-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:44.617541  295274 pod_ready.go:94] pod "kube-scheduler-addons-694780" is "Ready"
	I1019 12:16:44.617571  295274 pod_ready.go:86] duration metric: took 399.002717ms for pod "kube-scheduler-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:44.617584  295274 pod_ready.go:40] duration metric: took 1.604062784s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:16:44.689461  295274 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 12:16:44.692767  295274 out.go:179] * Done! kubectl is now configured to use "addons-694780" cluster and "default" namespace by default
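The inspektor-gadget retries above all fail the same way: kubectl's client-side validation rejects a document inside ig-crd.yaml that lacks the two mandatory type fields ("apiVersion not set, kind not set"). A minimal sketch of that failure mode, using a hypothetical manifest rather than the real ig-crd.yaml contents:

# Missing type fields: kubectl rejects this at validation time.
cat <<'EOF' | kubectl apply --dry-run=client -f -
metadata:
  name: broken-example        # hypothetical object, not from this report
EOF

# With both fields present (here a plain ConfigMap) validation passes;
# --validate=false, as the error text suggests, merely skips the check.
cat <<'EOF' | kubectl apply --dry-run=client -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: broken-example
EOF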
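The gcp-auth hint above ("add a label with the `gcp-auth-skip-secret` key") refers to a label in the pod spec at creation time, since the credentials are mounted by the gcp-auth mutating webhook (the gcp-auth-78565c9fb4-cdmqg pod later in this report) at admission. A hedged sketch, with a hypothetical pod name:

apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                 # illustrative, not from this report
  labels:
    gcp-auth-skip-secret: "true"     # key named in the log message above
spec:
  containers:
  - name: app
    image: docker.io/kicbase/echo-server:1.0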
	
	
	==> CRI-O <==
	Oct 19 12:19:36 addons-694780 crio[831]: time="2025-10-19T12:19:36.384898022Z" level=info msg="Removed container 6bbfe6ec790c726ec963a065ddc41c3d780ada035006853ee184438846c47885: kube-system/registry-creds-764b6fb674-c7zhl/registry-creds" id=0340916b-e04b-4f2f-bb73-32c4f7fcabb2 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:19:43 addons-694780 crio[831]: time="2025-10-19T12:19:43.464499159Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-5bls4/POD" id=54340bf2-df00-42e5-896f-3067a3d52fe8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:19:43 addons-694780 crio[831]: time="2025-10-19T12:19:43.464572234Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:19:43 addons-694780 crio[831]: time="2025-10-19T12:19:43.471734452Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-5bls4 Namespace:default ID:f0e4b2a1c29e2b0c8aab575ff873e77bf799900ad0409e063906749465136c49 UID:553d80d2-8177-410b-bf7b-4558b6423147 NetNS:/var/run/netns/a5b4eb72-7a09-4a2a-9347-428b75210c10 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078cb0}] Aliases:map[]}"
	Oct 19 12:19:43 addons-694780 crio[831]: time="2025-10-19T12:19:43.471773115Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-5bls4 to CNI network \"kindnet\" (type=ptp)"
	Oct 19 12:19:43 addons-694780 crio[831]: time="2025-10-19T12:19:43.488858042Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-5bls4 Namespace:default ID:f0e4b2a1c29e2b0c8aab575ff873e77bf799900ad0409e063906749465136c49 UID:553d80d2-8177-410b-bf7b-4558b6423147 NetNS:/var/run/netns/a5b4eb72-7a09-4a2a-9347-428b75210c10 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078cb0}] Aliases:map[]}"
	Oct 19 12:19:43 addons-694780 crio[831]: time="2025-10-19T12:19:43.489012899Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-5bls4 for CNI network kindnet (type=ptp)"
	Oct 19 12:19:43 addons-694780 crio[831]: time="2025-10-19T12:19:43.495315331Z" level=info msg="Ran pod sandbox f0e4b2a1c29e2b0c8aab575ff873e77bf799900ad0409e063906749465136c49 with infra container: default/hello-world-app-5d498dc89-5bls4/POD" id=54340bf2-df00-42e5-896f-3067a3d52fe8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:19:43 addons-694780 crio[831]: time="2025-10-19T12:19:43.499085115Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9a95697f-7768-4ccd-a9cb-b9e3e7b8fdf1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:19:43 addons-694780 crio[831]: time="2025-10-19T12:19:43.499370535Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=9a95697f-7768-4ccd-a9cb-b9e3e7b8fdf1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:19:43 addons-694780 crio[831]: time="2025-10-19T12:19:43.499486524Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=9a95697f-7768-4ccd-a9cb-b9e3e7b8fdf1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:19:43 addons-694780 crio[831]: time="2025-10-19T12:19:43.502921017Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=0864c09f-e431-4225-9258-f6426a36e121 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:19:43 addons-694780 crio[831]: time="2025-10-19T12:19:43.508228054Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 19 12:19:44 addons-694780 crio[831]: time="2025-10-19T12:19:44.173866861Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=0864c09f-e431-4225-9258-f6426a36e121 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:19:44 addons-694780 crio[831]: time="2025-10-19T12:19:44.174582293Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9d9e62d1-60df-4d5d-82c2-97ff76a6b2e5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:19:44 addons-694780 crio[831]: time="2025-10-19T12:19:44.178833167Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a176b7fa-bb64-4e7c-8aed-c306ea822844 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:19:44 addons-694780 crio[831]: time="2025-10-19T12:19:44.187282172Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-5bls4/hello-world-app" id=a3d6f1b9-ca7b-467d-a2a4-51f2cc6b35dc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:19:44 addons-694780 crio[831]: time="2025-10-19T12:19:44.188362639Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:19:44 addons-694780 crio[831]: time="2025-10-19T12:19:44.196534307Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:19:44 addons-694780 crio[831]: time="2025-10-19T12:19:44.196902271Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/df5a7e68856cde6edef7e30a155494c004b679d0d29d6c6d54b17ce3c5f8cdb1/merged/etc/passwd: no such file or directory"
	Oct 19 12:19:44 addons-694780 crio[831]: time="2025-10-19T12:19:44.197209754Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/df5a7e68856cde6edef7e30a155494c004b679d0d29d6c6d54b17ce3c5f8cdb1/merged/etc/group: no such file or directory"
	Oct 19 12:19:44 addons-694780 crio[831]: time="2025-10-19T12:19:44.197586753Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:19:44 addons-694780 crio[831]: time="2025-10-19T12:19:44.235390076Z" level=info msg="Created container 8babbe1fbfc6c19273ffb7edc3e81d24e2a62f96bcd73152c8913dd0a3828e76: default/hello-world-app-5d498dc89-5bls4/hello-world-app" id=a3d6f1b9-ca7b-467d-a2a4-51f2cc6b35dc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:19:44 addons-694780 crio[831]: time="2025-10-19T12:19:44.239668922Z" level=info msg="Starting container: 8babbe1fbfc6c19273ffb7edc3e81d24e2a62f96bcd73152c8913dd0a3828e76" id=05a5fea8-fde9-489b-81ab-ef1139dfec4b name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:19:44 addons-694780 crio[831]: time="2025-10-19T12:19:44.246995113Z" level=info msg="Started container" PID=7252 containerID=8babbe1fbfc6c19273ffb7edc3e81d24e2a62f96bcd73152c8913dd0a3828e76 description=default/hello-world-app-5d498dc89-5bls4/hello-world-app id=05a5fea8-fde9-489b-81ab-ef1139dfec4b name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0e4b2a1c29e2b0c8aab575ff873e77bf799900ad0409e063906749465136c49
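The CRI-O block above traces the standard CRI lifecycle for the hello-world-app pod: sandbox creation, image status check, pull, container creation, and start. The same objects can be inspected from the node with crictl against the CRI-O socket; a sketch reusing identifiers from this log (the socket path is CRI-O's usual default, assumed here):

sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pull docker.io/kicbase/echo-server:1.0
sudo crictl ps --name hello-world-app
sudo crictl inspect 8babbe1fbfc6c19273ffb7edc3e81d24e2a62f96bcd73152c8913dd0a3828e76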
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	8babbe1fbfc6c       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        1 second ago        Running             hello-world-app                          0                   f0e4b2a1c29e2       hello-world-app-5d498dc89-5bls4             default
	d7ea38ed1b5c7       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             9 seconds ago       Exited              registry-creds                           1                   14e9b731c86a5       registry-creds-764b6fb674-c7zhl             kube-system
	dc2a6fbeb03f3       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago       Running             nginx                                    0                   1ce9dd1715412       nginx                                       default
	88e7946bdf82d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago       Running             busybox                                  0                   8aa3fa57da3db       busybox                                     default
	babbcf90f6ac9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago       Running             csi-snapshotter                          0                   d665ebdf82843       csi-hostpathplugin-qx76c                    kube-system
	9f6526183c819       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago       Running             gcp-auth                                 0                   19ad011d1176a       gcp-auth-78565c9fb4-cdmqg                   gcp-auth
	4af26279aa6f2       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   d665ebdf82843       csi-hostpathplugin-qx76c                    kube-system
	dbc7b2d7b48c2       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   d665ebdf82843       csi-hostpathplugin-qx76c                    kube-system
	53d99b9c1fa5a       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   d665ebdf82843       csi-hostpathplugin-qx76c                    kube-system
	1159fff2343a5       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   d665ebdf82843       csi-hostpathplugin-qx76c                    kube-system
	dd7b562cc0cf0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago       Running             gadget                                   0                   6a7cf144efa51       gadget-qqrhf                                gadget
	c93e1337cec3a       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago       Running             controller                               0                   fa5aa62abc378       ingress-nginx-controller-675c5ddd98-5qr44   ingress-nginx
	976c559427e02       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago       Running             registry-proxy                           0                   54115c16633c7       registry-proxy-4r8wk                        kube-system
	20da76bbf7724       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago       Exited              patch                                    0                   c0793ad0a80bc       ingress-nginx-admission-patch-s49m4         ingress-nginx
	4e8fe40f4a508       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago       Running             csi-external-health-monitor-controller   0                   d665ebdf82843       csi-hostpathplugin-qx76c                    kube-system
	3c758f6c5602f       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago       Running             metrics-server                           0                   e1476dbecedaf       metrics-server-85b7d694d7-qjfpt             kube-system
	c93fad6f2f681       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago       Running             nvidia-device-plugin-ctr                 0                   2262fecea932f       nvidia-device-plugin-daemonset-rl6ct        kube-system
	d66a0ce31c46f       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago       Running             registry                                 0                   43d6f94bd7955       registry-6b586f9694-cz995                   kube-system
	019ec1d7cee73       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago       Running             minikube-ingress-dns                     0                   dbc2f931b1b2b       kube-ingress-dns-minikube                   kube-system
	82514a9622aa2       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago       Running             local-path-provisioner                   0                   821c63b750b1b       local-path-provisioner-648f6765c9-n4zsd     local-path-storage
	ad7a2781a873f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago       Running             volume-snapshot-controller               0                   0f0fb7349b5ee       snapshot-controller-7d9fbc56b8-slbnx        kube-system
	fc02e62488e86       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago       Running             yakd                                     0                   03be8b8b8c1a4       yakd-dashboard-5ff678cb9-wwfqw              yakd-dashboard
	80882ef14df04       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago       Running             csi-attacher                             0                   f345e92731737       csi-hostpath-attacher-0                     kube-system
	795c9019de222       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago       Running             volume-snapshot-controller               0                   fb2559abc3d3c       snapshot-controller-7d9fbc56b8-tpk9s        kube-system
	714974313acc5       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago       Running             cloud-spanner-emulator                   0                   f6cb2f3c02890       cloud-spanner-emulator-86bd5cbb97-6nxrn     default
	da425ec8726de       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   4 minutes ago       Exited              create                                   0                   b0dcfac982b64       ingress-nginx-admission-create-tcxc5        ingress-nginx
	1a89d3feb3cc1       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago       Running             csi-resizer                              0                   b1311b5a7abcc       csi-hostpath-resizer-0                      kube-system
	c1af9139ef29a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   fcba510a1debe       storage-provisioner                         kube-system
	c10333b42245b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago       Running             coredns                                  0                   9c546dccc3b1c       coredns-66bc5c9577-pmnfn                    kube-system
	0e8ae7e9978df       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago       Running             kube-proxy                               0                   0662f50b0bed2       kube-proxy-g2s4z                            kube-system
	1fbbdaf72898f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago       Running             kindnet-cni                              0                   5abad85dc189a       kindnet-hbjtx                               kube-system
	20700ce554fde       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago       Running             kube-scheduler                           0                   9192c2ce84035       kube-scheduler-addons-694780                kube-system
	ebc110500cd3d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago       Running             etcd                                     0                   3e50fb8e31c60       etcd-addons-694780                          kube-system
	4b12dbb529374       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago       Running             kube-controller-manager                  0                   0b2fd3ce2345b       kube-controller-manager-addons-694780       kube-system
	974f057716664       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago       Running             kube-apiserver                           0                   c3234c3fb92e1       kube-apiserver-addons-694780                kube-system
	
	
	==> coredns [c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d] <==
	[INFO] 10.244.0.12:44018 - 26937 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002469565s
	[INFO] 10.244.0.12:44018 - 24715 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000225743s
	[INFO] 10.244.0.12:44018 - 38521 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00029849s
	[INFO] 10.244.0.12:40245 - 19278 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000176355s
	[INFO] 10.244.0.12:40245 - 19081 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000081913s
	[INFO] 10.244.0.12:39800 - 13430 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091349s
	[INFO] 10.244.0.12:39800 - 13217 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076268s
	[INFO] 10.244.0.12:50239 - 42997 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084038s
	[INFO] 10.244.0.12:50239 - 42536 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098077s
	[INFO] 10.244.0.12:39746 - 2216 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001504309s
	[INFO] 10.244.0.12:39746 - 2405 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001082682s
	[INFO] 10.244.0.12:37177 - 7739 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000099013s
	[INFO] 10.244.0.12:37177 - 7551 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000202826s
	[INFO] 10.244.0.21:43894 - 59324 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000187925s
	[INFO] 10.244.0.21:56054 - 40855 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000414898s
	[INFO] 10.244.0.21:60374 - 45139 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159124s
	[INFO] 10.244.0.21:41277 - 20955 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136199s
	[INFO] 10.244.0.21:54940 - 12643 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013487s
	[INFO] 10.244.0.21:33728 - 52877 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117318s
	[INFO] 10.244.0.21:39460 - 38870 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003109295s
	[INFO] 10.244.0.21:50334 - 52949 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003441271s
	[INFO] 10.244.0.21:40438 - 53530 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000597465s
	[INFO] 10.244.0.21:52860 - 34637 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002178419s
	[INFO] 10.244.0.23:43609 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000237156s
	[INFO] 10.244.0.23:42908 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160175s
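The NXDOMAIN ladder in this CoreDNS block is ordinary search-path expansion rather than a fault: with the cluster's usual resolver options (ndots:5), a short name is retried against each search suffix in turn (the pod's namespace, svc.cluster.local, cluster.local, then the host's us-east-2.compute.internal domain) until the fully qualified registry.kube-system.svc.cluster.local. answers NOERROR. One way to confirm the search list from the busybox pod captured in this report (assuming, as is typical for that image, cat is present):

kubectl exec busybox -- cat /etc/resolv.conf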
	
	
	==> describe nodes <==
	Name:               addons-694780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-694780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=addons-694780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_14_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-694780
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-694780"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:14:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-694780
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:19:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:19:43 +0000   Sun, 19 Oct 2025 12:14:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:19:43 +0000   Sun, 19 Oct 2025 12:14:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:19:43 +0000   Sun, 19 Oct 2025 12:14:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:19:43 +0000   Sun, 19 Oct 2025 12:15:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-694780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                899ba98e-c2fa-4cbf-97dc-320d6f52a440
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     cloud-spanner-emulator-86bd5cbb97-6nxrn      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  default                     hello-world-app-5d498dc89-5bls4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-qqrhf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  gcp-auth                    gcp-auth-78565c9fb4-cdmqg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-5qr44    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m57s
	  kube-system                 coredns-66bc5c9577-pmnfn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m3s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 csi-hostpathplugin-qx76c                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 etcd-addons-694780                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m8s
	  kube-system                 kindnet-hbjtx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m4s
	  kube-system                 kube-apiserver-addons-694780                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-controller-manager-addons-694780        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-g2s4z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-addons-694780                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 metrics-server-85b7d694d7-qjfpt              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m58s
	  kube-system                 nvidia-device-plugin-daemonset-rl6ct         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 registry-6b586f9694-cz995                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 registry-creds-764b6fb674-c7zhl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 registry-proxy-4r8wk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 snapshot-controller-7d9fbc56b8-slbnx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 snapshot-controller-7d9fbc56b8-tpk9s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  local-path-storage          local-path-provisioner-648f6765c9-n4zsd      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-wwfqw               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m2s   kube-proxy       
	  Normal   Starting                 5m16s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m16s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m16s  kubelet          Node addons-694780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m16s  kubelet          Node addons-694780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m16s  kubelet          Node addons-694780 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m9s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m9s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m9s   kubelet          Node addons-694780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m9s   kubelet          Node addons-694780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m9s   kubelet          Node addons-694780 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m5s   node-controller  Node addons-694780 event: Registered Node addons-694780 in Controller
	  Normal   NodeReady                4m23s  kubelet          Node addons-694780 status is now: NodeReady
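	A node summary like the one above can be regenerated against the same cluster after the fact; a minimal example, assuming the kubeconfig context minikube creates for this profile (the same --context the post-mortem helpers below use):
	
	    kubectl --context addons-694780 describe node addons-694780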
	
	
	==> dmesg <==
	[Oct19 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015448] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.491491] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034667] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806219] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.239480] kauditd_printk_skb: 36 callbacks suppressed
	[Oct19 11:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct19 11:24] hrtimer: interrupt took 38365015 ns
	[Oct19 12:12] kauditd_printk_skb: 8 callbacks suppressed
	[Oct19 12:14] overlayfs: idmapped layers are currently not supported
	[  +0.068862] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85] <==
	{"level":"warn","ts":"2025-10-19T12:14:32.590818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.621116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.626972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.649200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.666009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.680035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.700249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.718435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.736017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.749280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.769367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.782600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.798647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.824188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.845712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.874218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.896560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.914420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:33.047574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:49.012469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:49.022065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:15:10.889081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:15:10.912301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:15:10.935975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:15:10.951892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34194","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [9f6526183c819720f10988ec9064a0726b712bb6b5f9110bc8603baa3f2fd7ed] <==
	2025/10/19 12:16:31 GCP Auth Webhook started!
	2025/10/19 12:16:45 Ready to marshal response ...
	2025/10/19 12:16:45 Ready to write response ...
	2025/10/19 12:16:45 Ready to marshal response ...
	2025/10/19 12:16:45 Ready to write response ...
	2025/10/19 12:16:45 Ready to marshal response ...
	2025/10/19 12:16:45 Ready to write response ...
	2025/10/19 12:17:05 Ready to marshal response ...
	2025/10/19 12:17:05 Ready to write response ...
	2025/10/19 12:17:19 Ready to marshal response ...
	2025/10/19 12:17:19 Ready to write response ...
	2025/10/19 12:17:21 Ready to marshal response ...
	2025/10/19 12:17:21 Ready to write response ...
	2025/10/19 12:17:34 Ready to marshal response ...
	2025/10/19 12:17:34 Ready to write response ...
	2025/10/19 12:17:45 Ready to marshal response ...
	2025/10/19 12:17:45 Ready to write response ...
	2025/10/19 12:17:45 Ready to marshal response ...
	2025/10/19 12:17:45 Ready to write response ...
	2025/10/19 12:17:54 Ready to marshal response ...
	2025/10/19 12:17:54 Ready to write response ...
	2025/10/19 12:19:43 Ready to marshal response ...
	2025/10/19 12:19:43 Ready to write response ...
	
	
	==> kernel <==
	 12:19:45 up  2:02,  0 user,  load average: 0.54, 2.10, 3.06
	Linux addons-694780 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9] <==
	I1019 12:17:42.420187       1 main.go:301] handling current node
	I1019 12:17:52.419914       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:17:52.419976       1 main.go:301] handling current node
	I1019 12:18:02.423273       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:18:02.423389       1 main.go:301] handling current node
	I1019 12:18:12.419907       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:18:12.419945       1 main.go:301] handling current node
	I1019 12:18:22.426084       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:18:22.426122       1 main.go:301] handling current node
	I1019 12:18:32.425027       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:18:32.425060       1 main.go:301] handling current node
	I1019 12:18:42.420306       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:18:42.420415       1 main.go:301] handling current node
	I1019 12:18:52.420290       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:18:52.420321       1 main.go:301] handling current node
	I1019 12:19:02.427684       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:19:02.427719       1 main.go:301] handling current node
	I1019 12:19:12.425846       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:19:12.425880       1 main.go:301] handling current node
	I1019 12:19:22.426336       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:19:22.426372       1 main.go:301] handling current node
	I1019 12:19:32.421887       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:19:32.421921       1 main.go:301] handling current node
	I1019 12:19:42.422210       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:19:42.422329       1 main.go:301] handling current node
	
	
	==> kube-apiserver [974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a] <==
	E1019 12:16:08.579895       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.12.41:443: connect: connection refused" logger="UnhandledError"
	E1019 12:16:08.582126       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.12.41:443: connect: connection refused" logger="UnhandledError"
	E1019 12:16:08.589115       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.12.41:443: connect: connection refused" logger="UnhandledError"
	E1019 12:16:08.610503       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.12.41:443: connect: connection refused" logger="UnhandledError"
	W1019 12:16:09.580235       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 12:16:09.580292       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1019 12:16:09.580310       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1019 12:16:09.580244       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 12:16:09.580385       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1019 12:16:09.581512       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1019 12:16:13.660491       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1019 12:16:13.660989       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 12:16:13.661034       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1019 12:16:13.709530       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1019 12:16:55.335569       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34552: use of closed network connection
	I1019 12:17:21.391020       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1019 12:17:21.905145       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.181.153"}
	I1019 12:17:31.522227       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1019 12:19:43.338336       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.158.13"}
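	The repeated connection-refused and 503 errors above show the API aggregation layer failing to reach metrics-server while it was still starting; they stop once metrics.k8s.io v1beta1 is added to the ResourceManager. A hedged way to inspect the aggregated API's registration afterwards, reusing the context name from this log:
	
	    kubectl --context addons-694780 get apiservice v1beta1.metrics.k8s.io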
	
	
	==> kube-controller-manager [4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a] <==
	I1019 12:14:40.904737       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 12:14:40.914453       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:14:40.916080       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 12:14:40.919132       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:14:40.919452       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 12:14:40.919904       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 12:14:40.919932       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 12:14:40.920244       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 12:14:40.920362       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 12:14:40.920450       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 12:14:40.920472       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 12:14:40.920507       1 shared_informer.go:356] "Caches are synced" controller="job"
	E1019 12:14:47.487546       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1019 12:15:10.880374       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 12:15:10.880533       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1019 12:15:10.880577       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1019 12:15:10.923873       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1019 12:15:10.927928       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1019 12:15:10.981320       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:15:11.029115       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:15:25.874683       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1019 12:15:40.990831       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 12:15:41.038488       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1019 12:16:10.996426       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 12:16:11.046584       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350] <==
	I1019 12:14:42.631676       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:14:42.908366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:14:43.010722       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:14:43.010756       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 12:14:43.010848       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:14:43.044835       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:14:43.044885       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:14:43.056942       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:14:43.058204       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:14:43.058231       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:14:43.070716       1 config.go:200] "Starting service config controller"
	I1019 12:14:43.070743       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:14:43.070762       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:14:43.070767       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:14:43.070794       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:14:43.070800       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:14:43.072800       1 config.go:309] "Starting node config controller"
	I1019 12:14:43.072825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:14:43.072836       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:14:43.171439       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:14:43.171481       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 12:14:43.171522       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
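	The kube-proxy warning above names its own remedy: nodePortAddresses is unset, so NodePort connections are accepted on all local IPs. A minimal configuration fragment applying the suggested setting, assuming the stock kubeproxy.config.k8s.io/v1alpha1 config group (illustrative, not taken from this cluster):
	
	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration
	    # Accept NodePort connections only on the node's primary-family address.
	    nodePortAddresses: ["primary"]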
	
	
	==> kube-scheduler [20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d] <==
	I1019 12:14:34.440163       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:14:34.442276       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:14:34.442381       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:14:34.442644       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:14:34.442754       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 12:14:34.453243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:14:34.453438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:14:34.453526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 12:14:34.453648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:14:34.453897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:14:34.453995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:14:34.454103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:14:34.454195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:14:34.454305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:14:34.454441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 12:14:34.454792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 12:14:34.454901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:14:34.455058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 12:14:34.455284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:14:34.455371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:14:34.455537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:14:34.457179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:14:34.457360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:14:34.457475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1019 12:14:35.643009       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:17:57 addons-694780 kubelet[1297]: I1019 12:17:57.225744    1297 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/ed24203b-ca42-4c5d-a7e5-8ac9ae8ea9c5-script\") on node \"addons-694780\" DevicePath \"\""
	Oct 19 12:17:58 addons-694780 kubelet[1297]: I1019 12:17:58.015582    1297 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f316e35014758f325ae12dd1806095016ca5007ae61a9103402df83dfda5a0e"
	Oct 19 12:17:58 addons-694780 kubelet[1297]: E1019 12:17:58.017274    1297 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-c2e9b24a-4b9e-48a1-a73a-ec392ca86059\" is forbidden: User \"system:node:addons-694780\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-694780' and this object" podUID="ed24203b-ca42-4c5d-a7e5-8ac9ae8ea9c5" pod="local-path-storage/helper-pod-delete-pvc-c2e9b24a-4b9e-48a1-a73a-ec392ca86059"
	Oct 19 12:17:58 addons-694780 kubelet[1297]: I1019 12:17:58.770145    1297 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed24203b-ca42-4c5d-a7e5-8ac9ae8ea9c5" path="/var/lib/kubelet/pods/ed24203b-ca42-4c5d-a7e5-8ac9ae8ea9c5/volumes"
	Oct 19 12:18:32 addons-694780 kubelet[1297]: I1019 12:18:32.768102    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rl6ct" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:18:36 addons-694780 kubelet[1297]: I1019 12:18:36.860578    1297 scope.go:117] "RemoveContainer" containerID="1c416fd8522da5527e15d0b2e5e5087a5c5ab61e2e3dd9ab4f428ff5ffad4452"
	Oct 19 12:18:36 addons-694780 kubelet[1297]: I1019 12:18:36.872403    1297 scope.go:117] "RemoveContainer" containerID="0418a6b20e4ad0dd52c0bdc99f0b38b6b3286986960f10b9c0052bd8bb5ff8fe"
	Oct 19 12:18:36 addons-694780 kubelet[1297]: I1019 12:18:36.894051    1297 scope.go:117] "RemoveContainer" containerID="b88bd855f07d6d00fcc1c5721236a8f9c0c0332a7a0ac4e6ad15f291a047dcd2"
	Oct 19 12:18:36 addons-694780 kubelet[1297]: E1019 12:18:36.899810    1297 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ea2f2f9a46601069fae96616d786a2ca58b5574bc40edc616b1dbec9e6278f02/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ea2f2f9a46601069fae96616d786a2ca58b5574bc40edc616b1dbec9e6278f02/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/default_task-pv-pod-restore_3dab0d9e-002f-46f9-addf-b19790f33da4/task-pv-container/0.log" to get inode usage: stat /var/log/pods/default_task-pv-pod-restore_3dab0d9e-002f-46f9-addf-b19790f33da4/task-pv-container/0.log: no such file or directory
	Oct 19 12:18:36 addons-694780 kubelet[1297]: E1019 12:18:36.904459    1297 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: <nil>, extraDiskErr: could not stat "/var/log/pods/local-path-storage_helper-pod-create-pvc-c2e9b24a-4b9e-48a1-a73a-ec392ca86059_c1502f8f-a7f9-4809-8146-e0ed1a83d52d/helper-pod/0.log" to get inode usage: stat /var/log/pods/local-path-storage_helper-pod-create-pvc-c2e9b24a-4b9e-48a1-a73a-ec392ca86059_c1502f8f-a7f9-4809-8146-e0ed1a83d52d/helper-pod/0.log: no such file or directory
	Oct 19 12:18:38 addons-694780 kubelet[1297]: I1019 12:18:38.767208    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-cz995" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:19:11 addons-694780 kubelet[1297]: I1019 12:19:11.767573    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-4r8wk" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:19:33 addons-694780 kubelet[1297]: I1019 12:19:33.169878    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-c7zhl" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:19:35 addons-694780 kubelet[1297]: I1019 12:19:35.360420    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-c7zhl" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:19:35 addons-694780 kubelet[1297]: I1019 12:19:35.360481    1297 scope.go:117] "RemoveContainer" containerID="6bbfe6ec790c726ec963a065ddc41c3d780ada035006853ee184438846c47885"
	Oct 19 12:19:36 addons-694780 kubelet[1297]: I1019 12:19:36.367323    1297 scope.go:117] "RemoveContainer" containerID="6bbfe6ec790c726ec963a065ddc41c3d780ada035006853ee184438846c47885"
	Oct 19 12:19:36 addons-694780 kubelet[1297]: I1019 12:19:36.368388    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-c7zhl" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:19:36 addons-694780 kubelet[1297]: I1019 12:19:36.368542    1297 scope.go:117] "RemoveContainer" containerID="d7ea38ed1b5c794df46c7c64daece15e8b0d9c4f6d91ae67e3fbfb3d1b14fce9"
	Oct 19 12:19:36 addons-694780 kubelet[1297]: E1019 12:19:36.378918    1297 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-c7zhl_kube-system(13adddb6-d4bf-4eff-8eef-f96cbd11e787)\"" pod="kube-system/registry-creds-764b6fb674-c7zhl" podUID="13adddb6-d4bf-4eff-8eef-f96cbd11e787"
	Oct 19 12:19:37 addons-694780 kubelet[1297]: I1019 12:19:37.372590    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-c7zhl" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:19:37 addons-694780 kubelet[1297]: I1019 12:19:37.372653    1297 scope.go:117] "RemoveContainer" containerID="d7ea38ed1b5c794df46c7c64daece15e8b0d9c4f6d91ae67e3fbfb3d1b14fce9"
	Oct 19 12:19:37 addons-694780 kubelet[1297]: E1019 12:19:37.372817    1297 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-c7zhl_kube-system(13adddb6-d4bf-4eff-8eef-f96cbd11e787)\"" pod="kube-system/registry-creds-764b6fb674-c7zhl" podUID="13adddb6-d4bf-4eff-8eef-f96cbd11e787"
	Oct 19 12:19:43 addons-694780 kubelet[1297]: I1019 12:19:43.202495    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw4pf\" (UniqueName: \"kubernetes.io/projected/553d80d2-8177-410b-bf7b-4558b6423147-kube-api-access-hw4pf\") pod \"hello-world-app-5d498dc89-5bls4\" (UID: \"553d80d2-8177-410b-bf7b-4558b6423147\") " pod="default/hello-world-app-5d498dc89-5bls4"
	Oct 19 12:19:43 addons-694780 kubelet[1297]: I1019 12:19:43.203153    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/553d80d2-8177-410b-bf7b-4558b6423147-gcp-creds\") pod \"hello-world-app-5d498dc89-5bls4\" (UID: \"553d80d2-8177-410b-bf7b-4558b6423147\") " pod="default/hello-world-app-5d498dc89-5bls4"
	Oct 19 12:19:43 addons-694780 kubelet[1297]: W1019 12:19:43.494181    1297 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6/crio-f0e4b2a1c29e2b0c8aab575ff873e77bf799900ad0409e063906749465136c49 WatchSource:0}: Error finding container f0e4b2a1c29e2b0c8aab575ff873e77bf799900ad0409e063906749465136c49: Status 404 returned error can't find the container with id f0e4b2a1c29e2b0c8aab575ff873e77bf799900ad0409e063906749465136c49
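	registry-creds is cycling through a 10s CrashLoopBackOff in the lines above; a hedged way to capture the crashed container's output, using the pod name from this log:
	
	    kubectl --context addons-694780 -n kube-system logs registry-creds-764b6fb674-c7zhl --previous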
	
	
	==> storage-provisioner [c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467] <==
	W1019 12:19:21.443722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:23.447347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:23.452003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:25.455090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:25.459688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:27.462428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:27.470488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:29.473429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:29.477932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:31.485905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:31.492274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:33.495743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:33.502012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:35.505327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:35.510044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:37.514109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:37.519190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:39.522103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:39.526824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:41.531016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:41.540565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:43.544266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:43.555685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:45.559849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:19:45.566003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
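	The warnings above are client-go deprecation notices: the provisioner still lists or watches v1 Endpoints on a roughly two-second cadence, most likely for its Endpoints-based leader-election lock. A minimal client-go sketch of the replacement the warning points at, discovery.k8s.io/v1 EndpointSlice (in-cluster config and the kube-system namespace are assumptions for illustration, not taken from this log):
	
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/rest"
	    )
	
	    func main() {
	    	// Assumes the process runs inside the cluster (illustrative).
	    	cfg, err := rest.InClusterConfig()
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs := kubernetes.NewForConfigOrDie(cfg)
	
	    	// List EndpointSlices instead of the deprecated v1 Endpoints.
	    	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").
	    		List(context.TODO(), metav1.ListOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, s := range slices.Items {
	    		fmt.Println(s.Name)
	    	}
	    }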
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-694780 -n addons-694780
helpers_test.go:269: (dbg) Run:  kubectl --context addons-694780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-tcxc5 ingress-nginx-admission-patch-s49m4
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-694780 describe pod ingress-nginx-admission-create-tcxc5 ingress-nginx-admission-patch-s49m4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-694780 describe pod ingress-nginx-admission-create-tcxc5 ingress-nginx-admission-patch-s49m4: exit status 1 (103.17661ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tcxc5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-s49m4" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-694780 describe pod ingress-nginx-admission-create-tcxc5 ingress-nginx-admission-patch-s49m4: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (268.872659ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:19:46.902175  305000 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:19:46.903025  305000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:19:46.903067  305000 out.go:374] Setting ErrFile to fd 2...
	I1019 12:19:46.903090  305000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:19:46.903370  305000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:19:46.903728  305000 mustload.go:65] Loading cluster: addons-694780
	I1019 12:19:46.904131  305000 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:19:46.904176  305000 addons.go:606] checking whether the cluster is paused
	I1019 12:19:46.904308  305000 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:19:46.904349  305000 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:19:46.904917  305000 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:19:46.921813  305000 ssh_runner.go:195] Run: systemctl --version
	I1019 12:19:46.921869  305000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:19:46.938984  305000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:19:47.044442  305000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:19:47.044526  305000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:19:47.081368  305000 cri.go:89] found id: "d7ea38ed1b5c794df46c7c64daece15e8b0d9c4f6d91ae67e3fbfb3d1b14fce9"
	I1019 12:19:47.081390  305000 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:19:47.081395  305000 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:19:47.081399  305000 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:19:47.081402  305000 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:19:47.081406  305000 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:19:47.081410  305000 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:19:47.081413  305000 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:19:47.081417  305000 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:19:47.081424  305000 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:19:47.081427  305000 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:19:47.081431  305000 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:19:47.081434  305000 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:19:47.081438  305000 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:19:47.081446  305000 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:19:47.081458  305000 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:19:47.081465  305000 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:19:47.081470  305000 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:19:47.081473  305000 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:19:47.081476  305000 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:19:47.081480  305000 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:19:47.081483  305000 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:19:47.081486  305000 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:19:47.081490  305000 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:19:47.081493  305000 cri.go:89] found id: ""
	I1019 12:19:47.081544  305000 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:19:47.096434  305000 out.go:203] 
	W1019 12:19:47.099256  305000 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:19:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:19:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:19:47.099287  305000 out.go:285] * 
	* 
	W1019 12:19:47.105739  305000 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:19:47.108847  305000 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
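	The mechanism behind the exit-11 failure is fully visible in the stderr above: minikube decides whether the cluster is paused by first listing kube-system containers with crictl and then asking runc for its container list, and on this crio node "sudo runc list -f json" fails because /run/runc does not exist, so the disable aborts with MK_ADDON_DISABLE_PAUSED. A condensed Go sketch of that two-step check, reusing the exact commands from the log (an illustration of the failure path, not minikube's actual source):
	
	    package main
	
	    import (
	    	"fmt"
	    	"os/exec"
	    )
	
	    func main() {
	    	// Step 1 (mirrors cri.go above): collect kube-system container IDs.
	    	list := exec.Command("sudo", "-s", "eval",
	    		`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`)
	    	if out, err := list.CombinedOutput(); err == nil {
	    		fmt.Printf("kube-system container IDs:\n%s", out)
	    	}
	
	    	// Step 2 (mirrors the failing call): ask runc which containers exist.
	    	// On this crio node the runc state dir /run/runc is missing, so the
	    	// command exits non-zero: open /run/runc: no such file or directory.
	    	paused := exec.Command("sudo", "runc", "list", "-f", "json")
	    	if out, err := paused.CombinedOutput(); err != nil {
	    		fmt.Printf("runc list failed: %v\n%s", err, out)
	    	}
	    }
	
	Run on the node (for example via "minikube -p addons-694780 ssh"), the second command reproduces the error string from the log verbatim.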
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable ingress --alsologtostderr -v=1: exit status 11 (268.186159ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:19:47.174540  305042 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:19:47.175336  305042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:19:47.175379  305042 out.go:374] Setting ErrFile to fd 2...
	I1019 12:19:47.175399  305042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:19:47.175671  305042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:19:47.176003  305042 mustload.go:65] Loading cluster: addons-694780
	I1019 12:19:47.176399  305042 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:19:47.176440  305042 addons.go:606] checking whether the cluster is paused
	I1019 12:19:47.176565  305042 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:19:47.176607  305042 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:19:47.177087  305042 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:19:47.194664  305042 ssh_runner.go:195] Run: systemctl --version
	I1019 12:19:47.194723  305042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:19:47.212989  305042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:19:47.316567  305042 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:19:47.316655  305042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:19:47.349859  305042 cri.go:89] found id: "d7ea38ed1b5c794df46c7c64daece15e8b0d9c4f6d91ae67e3fbfb3d1b14fce9"
	I1019 12:19:47.349882  305042 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:19:47.349886  305042 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:19:47.349890  305042 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:19:47.349893  305042 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:19:47.349897  305042 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:19:47.349900  305042 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:19:47.349903  305042 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:19:47.349906  305042 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:19:47.349912  305042 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:19:47.349915  305042 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:19:47.349918  305042 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:19:47.349926  305042 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:19:47.349930  305042 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:19:47.349933  305042 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:19:47.349938  305042 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:19:47.349946  305042 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:19:47.349950  305042 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:19:47.349953  305042 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:19:47.349956  305042 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:19:47.349961  305042 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:19:47.349967  305042 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:19:47.349971  305042 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:19:47.349974  305042 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:19:47.349978  305042 cri.go:89] found id: ""
	I1019 12:19:47.350033  305042 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:19:47.365880  305042 out.go:203] 
	W1019 12:19:47.368844  305042 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:19:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:19:47.368878  305042 out.go:285] * 
	W1019 12:19:47.375333  305042 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:19:47.378343  305042 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.35s)
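Note on the failure mode: every addons disable/enable failure in this report carries the same signature. Before touching an addon, minikube checks whether the cluster is paused (the "checking whether the cluster is paused" step at addons.go:606 above), and that check ends with `sudo runc list -f json`, which exits 1 on this crio node because /run/runc does not exist. A minimal sketch to reproduce the check by hand, assuming the profile name from this run; the alternate state root in the second command is an assumption, since crio points runc at its own configured root rather than runc's default /run/runc:

    # the exact command the paused-state check runs (fails on this node)
    out/minikube-linux-arm64 -p addons-694780 ssh -- sudo runc list -f json
    # same query against an explicitly configured runc state root (assumed path)
    out/minikube-linux-arm64 -p addons-694780 ssh -- sudo runc --root /run/runc-crio list -f json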

TestAddons/parallel/InspektorGadget (5.35s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-qqrhf" [76d3b271-7c45-46fb-b55b-5206b1d72e4d] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004599376s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (341.76175ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:17:20.778613  302439 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:17:20.779266  302439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:20.779281  302439 out.go:374] Setting ErrFile to fd 2...
	I1019 12:17:20.779287  302439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:20.779565  302439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:17:20.779856  302439 mustload.go:65] Loading cluster: addons-694780
	I1019 12:17:20.786642  302439 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:20.786711  302439 addons.go:606] checking whether the cluster is paused
	I1019 12:17:20.786882  302439 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:20.786910  302439 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:17:20.789927  302439 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:17:20.810947  302439 ssh_runner.go:195] Run: systemctl --version
	I1019 12:17:20.811014  302439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:17:20.840313  302439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:17:20.948844  302439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:17:20.948939  302439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:17:20.992386  302439 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:17:20.992410  302439 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:17:20.992415  302439 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:17:20.992419  302439 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:17:20.992423  302439 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:17:20.992426  302439 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:17:20.992430  302439 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:17:20.992433  302439 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:17:20.992436  302439 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:17:20.992445  302439 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:17:20.992448  302439 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:17:20.992452  302439 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:17:20.992455  302439 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:17:20.992458  302439 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:17:20.992462  302439 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:17:20.992468  302439 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:17:20.992471  302439 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:17:20.992476  302439 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:17:20.992479  302439 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:17:20.992482  302439 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:17:20.992492  302439 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:17:20.992496  302439 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:17:20.992506  302439 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:17:20.992510  302439 cri.go:89] found id: ""
	I1019 12:17:20.992557  302439 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:17:21.011656  302439 out.go:203] 
	W1019 12:17:21.014630  302439 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:17:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:17:21.014661  302439 out.go:285] * 
	W1019 12:17:21.021247  302439 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:17:21.024306  302439 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.35s)

TestAddons/parallel/MetricsServer (5.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.593872ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003833381s
addons_test.go:463: (dbg) Run:  kubectl --context addons-694780 top pods -n kube-system
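The `kubectl top pods` call above is served by the Metrics API that metrics-server exposes through the aggregation layer, so the same pod readings can also be pulled raw; a sketch, assuming the standard metrics.k8s.io/v1beta1 endpoint:

    kubectl --context addons-694780 get --raw \
      /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods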
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (268.512615ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:17:15.461991  302305 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:17:15.462899  302305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:15.462919  302305 out.go:374] Setting ErrFile to fd 2...
	I1019 12:17:15.462924  302305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:15.463237  302305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:17:15.463571  302305 mustload.go:65] Loading cluster: addons-694780
	I1019 12:17:15.463981  302305 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:15.464002  302305 addons.go:606] checking whether the cluster is paused
	I1019 12:17:15.464142  302305 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:15.464176  302305 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:17:15.464683  302305 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:17:15.488874  302305 ssh_runner.go:195] Run: systemctl --version
	I1019 12:17:15.488933  302305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:17:15.506247  302305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:17:15.612349  302305 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:17:15.612472  302305 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:17:15.646352  302305 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:17:15.646378  302305 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:17:15.646384  302305 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:17:15.646388  302305 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:17:15.646391  302305 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:17:15.646395  302305 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:17:15.646399  302305 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:17:15.646402  302305 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:17:15.646406  302305 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:17:15.646416  302305 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:17:15.646419  302305 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:17:15.646423  302305 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:17:15.646426  302305 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:17:15.646429  302305 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:17:15.646433  302305 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:17:15.646447  302305 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:17:15.646453  302305 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:17:15.646458  302305 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:17:15.646463  302305 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:17:15.646466  302305 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:17:15.646471  302305 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:17:15.646484  302305 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:17:15.646488  302305 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:17:15.646491  302305 cri.go:89] found id: ""
	I1019 12:17:15.646560  302305 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:17:15.661935  302305 out.go:203] 
	W1019 12:17:15.664972  302305 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:17:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:17:15.665065  302305 out.go:285] * 
	W1019 12:17:15.671473  302305 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:17:15.674575  302305 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.37s)

TestAddons/parallel/CSI (45.87s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1019 12:16:59.012995  294518 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1019 12:16:59.017126  294518 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1019 12:16:59.017157  294518 kapi.go:107] duration metric: took 4.176765ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.186882ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-694780 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc -o jsonpath={.status.phase} -n default
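The long run of Pending polls above is consistent with a WaitForFirstConsumer storage class: such a claim cannot bind until a consuming pod (created at addons_test.go:562 below) pins it to a node. A quick check of the binding mode, assuming the addon's class is named csi-hostpath-sc (the name is an assumption here):

    kubectl --context addons-694780 get storageclass csi-hostpath-sc \
      -o jsonpath='{.volumeBindingMode}'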
addons_test.go:562: (dbg) Run:  kubectl --context addons-694780 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c97630ce-afaf-4e19-b93d-bf71b7fd3750] Pending
helpers_test.go:352: "task-pv-pod" [c97630ce-afaf-4e19-b93d-bf71b7fd3750] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c97630ce-afaf-4e19-b93d-bf71b7fd3750] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003263051s
addons_test.go:572: (dbg) Run:  kubectl --context addons-694780 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-694780 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-694780 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
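The empty result in the first poll above is expected: .status.readyToUse on a VolumeSnapshot stays unset until the external snapshotter binds the content and cuts the snapshot. On kubectl 1.23 or newer, a single jsonpath wait can stand in for hand-rolled polling; a sketch:

    kubectl --context addons-694780 -n default wait volumesnapshot/new-snapshot-demo \
      --for=jsonpath='{.status.readyToUse}'=true --timeout=6m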
addons_test.go:582: (dbg) Run:  kubectl --context addons-694780 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-694780 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-694780 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-694780 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [3dab0d9e-002f-46f9-addf-b19790f33da4] Pending
helpers_test.go:352: "task-pv-pod-restore" [3dab0d9e-002f-46f9-addf-b19790f33da4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [3dab0d9e-002f-46f9-addf-b19790f33da4] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003884095s
addons_test.go:614: (dbg) Run:  kubectl --context addons-694780 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-694780 delete pod task-pv-pod-restore: (1.068788076s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-694780 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-694780 delete volumesnapshot new-snapshot-demo
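Run before the hpvc-restore deletion above, a check like the following verifies the restore chain end to end, assuming testdata/csi-hostpath-driver/pvc-restore.yaml follows the standard CSI restore pattern of a dataSource naming the snapshot:

    kubectl --context addons-694780 -n default get pvc hpvc-restore \
      -o jsonpath='{.spec.dataSource.kind}/{.spec.dataSource.name}'
    # expected output: VolumeSnapshot/new-snapshot-demo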
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (267.540724ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:17:44.406928  303245 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:17:44.407750  303245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:44.407770  303245 out.go:374] Setting ErrFile to fd 2...
	I1019 12:17:44.407776  303245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:44.408070  303245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:17:44.408385  303245 mustload.go:65] Loading cluster: addons-694780
	I1019 12:17:44.408747  303245 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:44.408764  303245 addons.go:606] checking whether the cluster is paused
	I1019 12:17:44.408865  303245 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:44.408884  303245 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:17:44.410564  303245 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:17:44.428596  303245 ssh_runner.go:195] Run: systemctl --version
	I1019 12:17:44.428670  303245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:17:44.446682  303245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:17:44.552260  303245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:17:44.552342  303245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:17:44.582846  303245 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:17:44.582870  303245 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:17:44.582879  303245 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:17:44.582884  303245 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:17:44.582889  303245 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:17:44.582893  303245 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:17:44.582896  303245 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:17:44.582899  303245 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:17:44.582902  303245 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:17:44.582908  303245 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:17:44.582912  303245 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:17:44.582915  303245 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:17:44.582918  303245 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:17:44.582921  303245 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:17:44.582924  303245 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:17:44.582930  303245 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:17:44.582933  303245 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:17:44.582947  303245 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:17:44.582951  303245 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:17:44.582955  303245 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:17:44.582960  303245 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:17:44.582967  303245 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:17:44.582970  303245 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:17:44.582973  303245 cri.go:89] found id: ""
	I1019 12:17:44.583026  303245 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:17:44.597788  303245 out.go:203] 
	W1019 12:17:44.600708  303245 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:17:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:17:44.600731  303245 out.go:285] * 
	W1019 12:17:44.607223  303245 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:17:44.610174  303245 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (259.300476ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:17:44.670534  303288 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:17:44.671497  303288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:44.671511  303288 out.go:374] Setting ErrFile to fd 2...
	I1019 12:17:44.671516  303288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:44.671825  303288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:17:44.672156  303288 mustload.go:65] Loading cluster: addons-694780
	I1019 12:17:44.672573  303288 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:44.672590  303288 addons.go:606] checking whether the cluster is paused
	I1019 12:17:44.672725  303288 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:44.672743  303288 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:17:44.673230  303288 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:17:44.690497  303288 ssh_runner.go:195] Run: systemctl --version
	I1019 12:17:44.690564  303288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:17:44.707648  303288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:17:44.812420  303288 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:17:44.812497  303288 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:17:44.841515  303288 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:17:44.841535  303288 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:17:44.841548  303288 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:17:44.841553  303288 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:17:44.841557  303288 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:17:44.841561  303288 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:17:44.841564  303288 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:17:44.841567  303288 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:17:44.841570  303288 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:17:44.841576  303288 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:17:44.841582  303288 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:17:44.841585  303288 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:17:44.841589  303288 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:17:44.841603  303288 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:17:44.841607  303288 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:17:44.841611  303288 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:17:44.841614  303288 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:17:44.841618  303288 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:17:44.841621  303288 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:17:44.841624  303288 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:17:44.841629  303288 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:17:44.841632  303288 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:17:44.841640  303288 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:17:44.841643  303288 cri.go:89] found id: ""
	I1019 12:17:44.841738  303288 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:17:44.857402  303288 out.go:203] 
	W1019 12:17:44.860398  303288 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:17:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:17:44.860425  303288 out.go:285] * 
	W1019 12:17:44.867572  303288 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:17:44.870678  303288 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (45.87s)

TestAddons/parallel/Headlamp (3.28s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-694780 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-694780 --alsologtostderr -v=1: exit status 11 (264.742505ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:16:55.783968  301489 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:16:55.784736  301489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:16:55.784749  301489 out.go:374] Setting ErrFile to fd 2...
	I1019 12:16:55.784754  301489 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:16:55.785013  301489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:16:55.785302  301489 mustload.go:65] Loading cluster: addons-694780
	I1019 12:16:55.785659  301489 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:16:55.785713  301489 addons.go:606] checking whether the cluster is paused
	I1019 12:16:55.785856  301489 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:16:55.785883  301489 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:16:55.786345  301489 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:16:55.803207  301489 ssh_runner.go:195] Run: systemctl --version
	I1019 12:16:55.803279  301489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:16:55.820890  301489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:16:55.928375  301489 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:16:55.928465  301489 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:16:55.959257  301489 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:16:55.959281  301489 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:16:55.959295  301489 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:16:55.959299  301489 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:16:55.959302  301489 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:16:55.959306  301489 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:16:55.959309  301489 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:16:55.959337  301489 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:16:55.959347  301489 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:16:55.959354  301489 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:16:55.959358  301489 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:16:55.959361  301489 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:16:55.959364  301489 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:16:55.959367  301489 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:16:55.959371  301489 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:16:55.959376  301489 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:16:55.959383  301489 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:16:55.959387  301489 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:16:55.959390  301489 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:16:55.959406  301489 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:16:55.959415  301489 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:16:55.959423  301489 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:16:55.959427  301489 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:16:55.959430  301489 cri.go:89] found id: ""
	I1019 12:16:55.959497  301489 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:16:55.974309  301489 out.go:203] 
	W1019 12:16:55.977230  301489 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:16:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:16:55.977257  301489 out.go:285] * 
	W1019 12:16:55.983682  301489 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:16:55.986537  301489 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-694780 --alsologtostderr -v=1": exit status 11
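Headlamp fails on the enable path for the same underlying reason as the disable calls (MK_ADDON_ENABLE_PAUSED rather than MK_ADDON_DISABLE_PAUSED): `addons enable` runs the identical paused-state check before applying any manifests. Note that the crictl half of the check succeeds on crio and only the trailing runc invocation fails, which the log's own crictl command confirms when run directly:

    out/minikube-linux-arm64 -p addons-694780 ssh -- \
      sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system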
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-694780
helpers_test.go:243: (dbg) docker inspect addons-694780:

-- stdout --
	[
	    {
	        "Id": "1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6",
	        "Created": "2025-10-19T12:14:08.1789404Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295674,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:14:08.236356286Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6/hostname",
	        "HostsPath": "/var/lib/docker/containers/1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6/hosts",
	        "LogPath": "/var/lib/docker/containers/1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6/1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6-json.log",
	        "Name": "/addons-694780",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-694780:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-694780",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6",
	                "LowerDir": "/var/lib/docker/overlay2/24b4d74c051b53eb5a98090b6fae5882d58acd7c302d8ac3ca9c1204895981b4-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/24b4d74c051b53eb5a98090b6fae5882d58acd7c302d8ac3ca9c1204895981b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/24b4d74c051b53eb5a98090b6fae5882d58acd7c302d8ac3ca9c1204895981b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/24b4d74c051b53eb5a98090b6fae5882d58acd7c302d8ac3ca9c1204895981b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-694780",
	                "Source": "/var/lib/docker/volumes/addons-694780/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-694780",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-694780",
	                "name.minikube.sigs.k8s.io": "addons-694780",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "25b961a5947230fb374b7ba5aa98853a7d9052cf5fbe149e8a1cb968e89f5d03",
	            "SandboxKey": "/var/run/docker/netns/25b961a59472",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-694780": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:6c:a2:a2:7e:bc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e72e5be0d8e39e54cd93f8c6194d3277252a7c979ea76a31ac8ec3c9e23e57fe",
	                    "EndpointID": "f500bb2a92c27f024cf66fb0bebe85c183d7984b6851977c8ffe1150fba4b24e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-694780",
	                        "1204b1775048"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
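The 22/tcp host-port binding in the inspect output above is the same value minikube reads back with a Go template during provisioning (the identical template appears in the Last Start log below); a one-line sketch of that query against this container:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-694780
	# prints 33138 for this run, per the NetworkSettings.Ports block above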
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-694780 -n addons-694780
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-694780 logs -n 25: (1.578315281s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-865961 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-865961   │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:13 UTC │
	│ delete  │ -p download-only-865961                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-865961   │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:13 UTC │
	│ start   │ -o=json --download-only -p download-only-900450 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-900450   │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:13 UTC │
	│ delete  │ -p download-only-900450                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-900450   │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:13 UTC │
	│ delete  │ -p download-only-865961                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-865961   │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:13 UTC │
	│ delete  │ -p download-only-900450                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-900450   │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:13 UTC │
	│ start   │ --download-only -p download-docker-107639 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-107639 │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │                     │
	│ delete  │ -p download-docker-107639                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-107639 │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:13 UTC │
	│ start   │ --download-only -p binary-mirror-974688 --alsologtostderr --binary-mirror http://127.0.0.1:41571 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-974688   │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │                     │
	│ delete  │ -p binary-mirror-974688                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-974688   │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:13 UTC │
	│ addons  │ enable dashboard -p addons-694780                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │                     │
	│ addons  │ disable dashboard -p addons-694780                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │                     │
	│ start   │ -p addons-694780 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:16 UTC │
	│ addons  │ addons-694780 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:16 UTC │                     │
	│ addons  │ addons-694780 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-694780 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-694780          │ jenkins │ v1.37.0 │ 19 Oct 25 12:16 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:13:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:13:42.003152  295274 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:13:42.003315  295274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:13:42.003352  295274 out.go:374] Setting ErrFile to fd 2...
	I1019 12:13:42.003359  295274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:13:42.003730  295274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:13:42.004387  295274 out.go:368] Setting JSON to false
	I1019 12:13:42.005390  295274 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6972,"bootTime":1760869050,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 12:13:42.005488  295274 start.go:141] virtualization:  
	I1019 12:13:42.009048  295274 out.go:179] * [addons-694780] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 12:13:42.013259  295274 notify.go:220] Checking for updates...
	I1019 12:13:42.016485  295274 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:13:42.019473  295274 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:13:42.022577  295274 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 12:13:42.025710  295274 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 12:13:42.028732  295274 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 12:13:42.031894  295274 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:13:42.035161  295274 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:13:42.069927  295274 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 12:13:42.070076  295274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:13:42.147765  295274 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-19 12:13:42.137429844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:13:42.147899  295274 docker.go:318] overlay module found
	I1019 12:13:42.151135  295274 out.go:179] * Using the docker driver based on user configuration
	I1019 12:13:42.154130  295274 start.go:305] selected driver: docker
	I1019 12:13:42.154181  295274 start.go:925] validating driver "docker" against <nil>
	I1019 12:13:42.154203  295274 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:13:42.155033  295274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:13:42.220415  295274 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-19 12:13:42.20881311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:13:42.220594  295274 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:13:42.220831  295274 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:13:42.223828  295274 out.go:179] * Using Docker driver with root privileges
	I1019 12:13:42.226742  295274 cni.go:84] Creating CNI manager for ""
	I1019 12:13:42.226821  295274 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:13:42.226834  295274 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:13:42.226927  295274 start.go:349] cluster config:
	{Name:addons-694780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-694780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:13:42.230316  295274 out.go:179] * Starting "addons-694780" primary control-plane node in "addons-694780" cluster
	I1019 12:13:42.233216  295274 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:13:42.236285  295274 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:13:42.239184  295274 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:13:42.239247  295274 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 12:13:42.239281  295274 cache.go:58] Caching tarball of preloaded images
	I1019 12:13:42.239271  295274 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:13:42.239413  295274 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 12:13:42.239426  295274 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:13:42.239820  295274 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/config.json ...
	I1019 12:13:42.239855  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/config.json: {Name:mk4d2d5e0873fa20b844f128ceba5b32c5ea6045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:13:42.257356  295274 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 12:13:42.257521  295274 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 12:13:42.257541  295274 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1019 12:13:42.257558  295274 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1019 12:13:42.257566  295274 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1019 12:13:42.257571  295274 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1019 12:14:00.396934  295274 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1019 12:14:00.396972  295274 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:14:00.397005  295274 start.go:360] acquireMachinesLock for addons-694780: {Name:mk35cb5f0a4d472e9c073f15331d1036d68f1f63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:14:00.397160  295274 start.go:364] duration metric: took 134.985µs to acquireMachinesLock for "addons-694780"
	I1019 12:14:00.397192  295274 start.go:93] Provisioning new machine with config: &{Name:addons-694780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-694780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:14:00.397300  295274 start.go:125] createHost starting for "" (driver="docker")
	I1019 12:14:00.400926  295274 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1019 12:14:00.401206  295274 start.go:159] libmachine.API.Create for "addons-694780" (driver="docker")
	I1019 12:14:00.401264  295274 client.go:168] LocalClient.Create starting
	I1019 12:14:00.401425  295274 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem
	I1019 12:14:00.608971  295274 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem
	I1019 12:14:01.439067  295274 cli_runner.go:164] Run: docker network inspect addons-694780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 12:14:01.454592  295274 cli_runner.go:211] docker network inspect addons-694780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 12:14:01.454696  295274 network_create.go:284] running [docker network inspect addons-694780] to gather additional debugging logs...
	I1019 12:14:01.454720  295274 cli_runner.go:164] Run: docker network inspect addons-694780
	W1019 12:14:01.469995  295274 cli_runner.go:211] docker network inspect addons-694780 returned with exit code 1
	I1019 12:14:01.470025  295274 network_create.go:287] error running [docker network inspect addons-694780]: docker network inspect addons-694780: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-694780 not found
	I1019 12:14:01.470040  295274 network_create.go:289] output of [docker network inspect addons-694780]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-694780 not found
	
	** /stderr **
	I1019 12:14:01.470150  295274 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:14:01.487388  295274 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c46050}
	I1019 12:14:01.487434  295274 network_create.go:124] attempt to create docker network addons-694780 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1019 12:14:01.487489  295274 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-694780 addons-694780
	I1019 12:14:01.547881  295274 network_create.go:108] docker network addons-694780 192.168.49.0/24 created
	I1019 12:14:01.547910  295274 kic.go:121] calculated static IP "192.168.49.2" for the "addons-694780" container
	I1019 12:14:01.547991  295274 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 12:14:01.563837  295274 cli_runner.go:164] Run: docker volume create addons-694780 --label name.minikube.sigs.k8s.io=addons-694780 --label created_by.minikube.sigs.k8s.io=true
	I1019 12:14:01.584920  295274 oci.go:103] Successfully created a docker volume addons-694780
	I1019 12:14:01.585017  295274 cli_runner.go:164] Run: docker run --rm --name addons-694780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-694780 --entrypoint /usr/bin/test -v addons-694780:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 12:14:03.689636  295274 cli_runner.go:217] Completed: docker run --rm --name addons-694780-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-694780 --entrypoint /usr/bin/test -v addons-694780:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.10457888s)
	I1019 12:14:03.689667  295274 oci.go:107] Successfully prepared a docker volume addons-694780
	I1019 12:14:03.689727  295274 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:14:03.689775  295274 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 12:14:03.689863  295274 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-694780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 12:14:08.112616  295274 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-694780:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.422712279s)
	I1019 12:14:08.112651  295274 kic.go:203] duration metric: took 4.422884967s to extract preloaded images to volume ...
	W1019 12:14:08.112812  295274 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 12:14:08.112922  295274 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 12:14:08.164188  295274 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-694780 --name addons-694780 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-694780 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-694780 --network addons-694780 --ip 192.168.49.2 --volume addons-694780:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 12:14:08.447305  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Running}}
	I1019 12:14:08.467652  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:08.489554  295274 cli_runner.go:164] Run: docker exec addons-694780 stat /var/lib/dpkg/alternatives/iptables
	I1019 12:14:08.539040  295274 oci.go:144] the created container "addons-694780" has a running status.
	I1019 12:14:08.539067  295274 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa...
	I1019 12:14:08.959464  295274 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 12:14:08.999733  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:09.019149  295274 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 12:14:09.019170  295274 kic_runner.go:114] Args: [docker exec --privileged addons-694780 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 12:14:09.060907  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:09.079146  295274 machine.go:93] provisionDockerMachine start ...
	I1019 12:14:09.079247  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:09.096513  295274 main.go:141] libmachine: Using SSH client type: native
	I1019 12:14:09.096856  295274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:14:09.096865  295274 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:14:09.097531  295274 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1019 12:14:12.244930  295274 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-694780
	
	I1019 12:14:12.244955  295274 ubuntu.go:182] provisioning hostname "addons-694780"
	I1019 12:14:12.245016  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:12.262129  295274 main.go:141] libmachine: Using SSH client type: native
	I1019 12:14:12.262441  295274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:14:12.262458  295274 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-694780 && echo "addons-694780" | sudo tee /etc/hostname
	I1019 12:14:12.418418  295274 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-694780
	
	I1019 12:14:12.418524  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:12.435143  295274 main.go:141] libmachine: Using SSH client type: native
	I1019 12:14:12.435470  295274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:14:12.435493  295274 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-694780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-694780/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-694780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:14:12.581596  295274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:14:12.581624  295274 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 12:14:12.581652  295274 ubuntu.go:190] setting up certificates
	I1019 12:14:12.581663  295274 provision.go:84] configureAuth start
	I1019 12:14:12.581740  295274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-694780
	I1019 12:14:12.602074  295274 provision.go:143] copyHostCerts
	I1019 12:14:12.602167  295274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 12:14:12.602325  295274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 12:14:12.602387  295274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 12:14:12.602436  295274 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.addons-694780 san=[127.0.0.1 192.168.49.2 addons-694780 localhost minikube]
	I1019 12:14:12.862682  295274 provision.go:177] copyRemoteCerts
	I1019 12:14:12.862749  295274 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:14:12.862789  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:12.882034  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:12.985295  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:14:13.003021  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 12:14:13.021703  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:14:13.039512  295274 provision.go:87] duration metric: took 457.823307ms to configureAuth
	I1019 12:14:13.039538  295274 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:14:13.039731  295274 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:14:13.039842  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:13.056705  295274 main.go:141] libmachine: Using SSH client type: native
	I1019 12:14:13.057015  295274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:14:13.057035  295274 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:14:13.309490  295274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:14:13.309510  295274 machine.go:96] duration metric: took 4.230345285s to provisionDockerMachine
	I1019 12:14:13.309520  295274 client.go:171] duration metric: took 12.908242335s to LocalClient.Create
	I1019 12:14:13.309533  295274 start.go:167] duration metric: took 12.908329171s to libmachine.API.Create "addons-694780"
	I1019 12:14:13.309540  295274 start.go:293] postStartSetup for "addons-694780" (driver="docker")
	I1019 12:14:13.309550  295274 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:14:13.309614  295274 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:14:13.309668  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:13.327621  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:13.433731  295274 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:14:13.437088  295274 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:14:13.437153  295274 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:14:13.437171  295274 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 12:14:13.437254  295274 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 12:14:13.437281  295274 start.go:296] duration metric: took 127.735602ms for postStartSetup
	I1019 12:14:13.437606  295274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-694780
	I1019 12:14:13.453886  295274 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/config.json ...
	I1019 12:14:13.454182  295274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:14:13.454232  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:13.470698  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:13.570435  295274 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:14:13.574907  295274 start.go:128] duration metric: took 13.177582447s to createHost
	I1019 12:14:13.574933  295274 start.go:83] releasing machines lock for "addons-694780", held for 13.177762528s
	I1019 12:14:13.575004  295274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-694780
	I1019 12:14:13.591947  295274 ssh_runner.go:195] Run: cat /version.json
	I1019 12:14:13.592006  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:13.592275  295274 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:14:13.592343  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:13.610194  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:13.613784  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:13.802843  295274 ssh_runner.go:195] Run: systemctl --version
	I1019 12:14:13.809147  295274 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:14:13.845006  295274 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:14:13.849342  295274 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:14:13.849414  295274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:14:13.878459  295274 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
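
The find/-exec pair above parks every bridge- or podman-style CNI config under a .mk_disabled suffix so that only the kindnet config installed later is loaded. The log prints the command with its shell escaping stripped; a runnable form (quoting reconstructed, which is an assumption about the original) looks like:

    # Disable competing CNI configs by renaming them out of the glob that
    # CRI-O scans in /etc/cni/net.d; print each path as it is moved.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
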
	I1019 12:14:13.878485  295274 start.go:495] detecting cgroup driver to use...
	I1019 12:14:13.878546  295274 detect.go:187] detected "cgroupfs" cgroup driver on host os
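
The detected "cgroupfs" driver feeds the cgroup_manager setting written a few lines below. Not the exact probe minikube uses, but a common way to tell the two hierarchies apart from a shell:

    # cgroup v2 mounts a unified cgroup2fs at /sys/fs/cgroup; v1 mounts tmpfs
    # there with per-controller subdirectories.
    if [ "$(stat -fc %T /sys/fs/cgroup/)" = "cgroup2fs" ]; then
      echo "cgroup v2 (systemd driver is the usual choice)"
    else
      echo "cgroup v1 (matches the cgroupfs driver detected in this run)"
    fi
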
	I1019 12:14:13.878611  295274 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:14:13.895191  295274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:14:13.907468  295274 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:14:13.907535  295274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:14:13.924986  295274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:14:13.943207  295274 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:14:14.060735  295274 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:14:14.190231  295274 docker.go:234] disabling docker service ...
	I1019 12:14:14.190303  295274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:14:14.210252  295274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:14:14.223688  295274 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:14:14.341737  295274 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:14:14.465040  295274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
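
With containerd stopped and cri-docker/docker stopped, disabled, and masked, CRI-O is the only runtime left on the node. The same systemctl sequence the log issues one unit at a time, condensed into a sketch (stop failures on absent units are harmless):

    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" 2>/dev/null || true
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
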
	I1019 12:14:14.476952  295274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:14:14.491108  295274 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:14:14.491194  295274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:14:14.499208  295274 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 12:14:14.499300  295274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:14:14.507935  295274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:14:14.516399  295274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:14:14.524574  295274 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:14:14.532356  295274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:14:14.540789  295274 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:14:14.554659  295274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
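
Taken together, the sed edits above pin the pause image, switch the cgroup manager, route conmon into the pod cgroup, and open unprivileged low ports. Roughly what the drop-in ends up containing, as an approximation only (the base image ships more keys, and table placement can vary by CRI-O version):

    # Approximate resulting /etc/crio/crio.conf.d/02-crio.conf (sketch only)
    cat <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
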
	I1019 12:14:14.563160  295274 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:14:14.570601  295274 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:14:14.578071  295274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:14:14.696899  295274 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 12:14:14.818827  295274 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:14:14.818914  295274 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:14:14.822875  295274 start.go:563] Will wait 60s for crictl version
	I1019 12:14:14.822937  295274 ssh_runner.go:195] Run: which crictl
	I1019 12:14:14.826449  295274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:14:14.855395  295274 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:14:14.855493  295274 ssh_runner.go:195] Run: crio --version
	I1019 12:14:14.885643  295274 ssh_runner.go:195] Run: crio --version
	I1019 12:14:14.918034  295274 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:14:14.920933  295274 cli_runner.go:164] Run: docker network inspect addons-694780 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:14:14.937811  295274 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1019 12:14:14.941603  295274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
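
The hosts-file idiom above is reused later for control-plane.minikube.internal: filter out any stale line for the name, append the fresh mapping, and copy the result back under sudo. Generalized from the command in the log:

    # Idempotently (re)pin NAME to IP in /etc/hosts; values from this run.
    NAME=host.minikube.internal IP=192.168.49.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
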
	I1019 12:14:14.952026  295274 kubeadm.go:883] updating cluster {Name:addons-694780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-694780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:14:14.952163  295274 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:14:14.952230  295274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:14:14.985307  295274 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:14:14.985331  295274 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:14:14.985386  295274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:14:15.021251  295274 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:14:15.021277  295274 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:14:15.021285  295274 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1019 12:14:15.021388  295274 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-694780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-694780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:14:15.021481  295274 ssh_runner.go:195] Run: crio config
	I1019 12:14:15.102018  295274 cni.go:84] Creating CNI manager for ""
	I1019 12:14:15.102042  295274 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:14:15.102070  295274 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:14:15.102096  295274 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-694780 NodeName:addons-694780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:14:15.102227  295274 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-694780"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
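	
	Everything from InitConfiguration through KubeProxyConfiguration above is the file written to /var/tmp/minikube/kubeadm.yaml a few lines below. The same file can be sanity-checked offline before init; a sketch, not something this run performs (the kubeadm binary path is the one the log validates next):
	
	    # Dry-run the generated config against the matching kubeadm binary.
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --dry-run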
	
	I1019 12:14:15.102309  295274 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:14:15.111146  295274 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:14:15.111238  295274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:14:15.119726  295274 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1019 12:14:15.133417  295274 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:14:15.147105  295274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1019 12:14:15.160654  295274 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:14:15.164403  295274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:14:15.174825  295274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:14:15.290298  295274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:14:15.305425  295274 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780 for IP: 192.168.49.2
	I1019 12:14:15.305496  295274 certs.go:195] generating shared ca certs ...
	I1019 12:14:15.305525  295274 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:15.305710  295274 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 12:14:15.699481  295274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt ...
	I1019 12:14:15.699513  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt: {Name:mkbdb340720a23421771727d8d82cd155586a3a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:15.699711  295274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key ...
	I1019 12:14:15.699725  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key: {Name:mkb0f5ea7800903ee705f0d24dab1dda42de7cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:15.700470  295274 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 12:14:16.978428  295274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt ...
	I1019 12:14:16.978461  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt: {Name:mk478a139f219e0253a4433782505f57036a141f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:16.979250  295274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key ...
	I1019 12:14:16.979270  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key: {Name:mk262cdae2421416cd180a921f27e81c3d2f5e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:16.979923  295274 certs.go:257] generating profile certs ...
	I1019 12:14:16.979995  295274 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.key
	I1019 12:14:16.980016  295274 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt with IP's: []
	I1019 12:14:17.175754  295274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt ...
	I1019 12:14:17.175788  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: {Name:mk49143f236dccb148777098ef32cfeedec13fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:17.175979  295274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.key ...
	I1019 12:14:17.175996  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.key: {Name:mk45d333176afd19a6094b3d6823bdfa3b87aaab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:17.176734  295274 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.key.0b167051
	I1019 12:14:17.176759  295274 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.crt.0b167051 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1019 12:14:17.852454  295274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.crt.0b167051 ...
	I1019 12:14:17.852485  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.crt.0b167051: {Name:mk0196843a6b59e70435c85f289b7fcb0e8b8230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:17.853355  295274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.key.0b167051 ...
	I1019 12:14:17.853372  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.key.0b167051: {Name:mk17476e861b595ca5cc127a8d4936060a774bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:17.854045  295274 certs.go:382] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.crt.0b167051 -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.crt
	I1019 12:14:17.854173  295274 certs.go:386] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.key.0b167051 -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.key
	I1019 12:14:17.854235  295274 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.key
	I1019 12:14:17.854257  295274 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.crt with IP's: []
	I1019 12:14:19.142733  295274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.crt ...
	I1019 12:14:19.142766  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.crt: {Name:mke18feaf7a8d5491aa718a872ccbfff12b25f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:19.143550  295274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.key ...
	I1019 12:14:19.143570  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.key: {Name:mk7208500ffb6cf3608b744a96af205315fe241d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:19.144405  295274 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 12:14:19.144465  295274 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:14:19.144494  295274 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:14:19.144521  295274 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 12:14:19.145165  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:14:19.163832  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:14:19.183901  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:14:19.204575  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 12:14:19.224308  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 12:14:19.243250  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:14:19.260449  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:14:19.278256  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:14:19.296406  295274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:14:19.313736  295274 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:14:19.326189  295274 ssh_runner.go:195] Run: openssl version
	I1019 12:14:19.332244  295274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:14:19.340993  295274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:14:19.344493  295274 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:14:19.344557  295274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:14:19.387279  295274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
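
The two commands above wire minikubeCA.pem into the system trust store: OpenSSL looks certificates up by subject hash, so the b5213941.0 link name is derived, not arbitrary. Recomputing it on the node:

    # The .0-suffixed link name is the OpenSSL subject hash of the CA cert.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$HASH"   # prints b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
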
	I1019 12:14:19.395469  295274 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:14:19.398969  295274 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 12:14:19.399021  295274 kubeadm.go:400] StartCluster: {Name:addons-694780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-694780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:14:19.399091  295274 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:14:19.399144  295274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:14:19.425294  295274 cri.go:89] found id: ""
	I1019 12:14:19.425372  295274 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:14:19.432877  295274 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 12:14:19.440367  295274 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 12:14:19.440433  295274 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 12:14:19.448066  295274 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 12:14:19.448130  295274 kubeadm.go:157] found existing configuration files:
	
	I1019 12:14:19.448207  295274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 12:14:19.455735  295274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 12:14:19.455811  295274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 12:14:19.463349  295274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 12:14:19.470905  295274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 12:14:19.470989  295274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 12:14:19.478044  295274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 12:14:19.485424  295274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 12:14:19.485741  295274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 12:14:19.496364  295274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 12:14:19.503984  295274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 12:14:19.504078  295274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
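
The four grep/rm pairs above implement one rule: a leftover kubeconfig is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm regenerates it. The same check as a loop:

    EP=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$EP" "/etc/kubernetes/$f.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done
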
	I1019 12:14:19.511378  295274 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 12:14:19.578409  295274 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 12:14:19.578748  295274 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 12:14:19.650148  295274 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 12:14:37.399579  295274 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 12:14:37.399639  295274 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 12:14:37.399767  295274 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 12:14:37.399836  295274 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 12:14:37.399872  295274 kubeadm.go:318] OS: Linux
	I1019 12:14:37.399920  295274 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 12:14:37.399971  295274 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1019 12:14:37.400021  295274 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 12:14:37.400072  295274 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 12:14:37.400122  295274 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 12:14:37.400175  295274 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 12:14:37.400222  295274 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 12:14:37.400272  295274 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 12:14:37.400320  295274 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1019 12:14:37.400396  295274 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 12:14:37.400494  295274 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 12:14:37.400588  295274 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 12:14:37.400677  295274 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 12:14:37.403847  295274 out.go:252]   - Generating certificates and keys ...
	I1019 12:14:37.403937  295274 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 12:14:37.404010  295274 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1019 12:14:37.404091  295274 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 12:14:37.404155  295274 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 12:14:37.404221  295274 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 12:14:37.404278  295274 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 12:14:37.404338  295274 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 12:14:37.404462  295274 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-694780 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 12:14:37.404521  295274 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 12:14:37.404644  295274 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-694780 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 12:14:37.404715  295274 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 12:14:37.404785  295274 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 12:14:37.404842  295274 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 12:14:37.404901  295274 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 12:14:37.404960  295274 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 12:14:37.405024  295274 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 12:14:37.405086  295274 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 12:14:37.405158  295274 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 12:14:37.405220  295274 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 12:14:37.405309  295274 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 12:14:37.405382  295274 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 12:14:37.410146  295274 out.go:252]   - Booting up control plane ...
	I1019 12:14:37.410270  295274 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 12:14:37.410356  295274 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 12:14:37.410469  295274 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 12:14:37.410641  295274 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 12:14:37.410755  295274 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 12:14:37.410887  295274 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 12:14:37.410982  295274 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 12:14:37.411029  295274 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 12:14:37.411169  295274 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 12:14:37.411280  295274 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 12:14:37.411345  295274 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500795726s
	I1019 12:14:37.411445  295274 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 12:14:37.411532  295274 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1019 12:14:37.411628  295274 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 12:14:37.411713  295274 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 12:14:37.411795  295274 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.296248699s
	I1019 12:14:37.411868  295274 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.966396845s
	I1019 12:14:37.411942  295274 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502030707s
	I1019 12:14:37.412055  295274 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 12:14:37.412189  295274 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 12:14:37.412253  295274 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 12:14:37.412456  295274 kubeadm.go:318] [mark-control-plane] Marking the node addons-694780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 12:14:37.412518  295274 kubeadm.go:318] [bootstrap-token] Using token: ye03ax.m6ox9zrlec6c94l4
	I1019 12:14:37.415499  295274 out.go:252]   - Configuring RBAC rules ...
	I1019 12:14:37.415668  295274 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 12:14:37.415778  295274 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 12:14:37.415942  295274 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 12:14:37.416076  295274 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 12:14:37.416196  295274 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 12:14:37.416285  295274 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 12:14:37.416405  295274 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 12:14:37.416450  295274 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 12:14:37.416498  295274 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 12:14:37.416502  295274 kubeadm.go:318] 
	I1019 12:14:37.416565  295274 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 12:14:37.416569  295274 kubeadm.go:318] 
	I1019 12:14:37.416649  295274 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 12:14:37.416653  295274 kubeadm.go:318] 
	I1019 12:14:37.416680  295274 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 12:14:37.416761  295274 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 12:14:37.416814  295274 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 12:14:37.416819  295274 kubeadm.go:318] 
	I1019 12:14:37.416876  295274 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 12:14:37.416879  295274 kubeadm.go:318] 
	I1019 12:14:37.416929  295274 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 12:14:37.416933  295274 kubeadm.go:318] 
	I1019 12:14:37.416987  295274 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 12:14:37.417065  295274 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 12:14:37.417136  295274 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 12:14:37.417141  295274 kubeadm.go:318] 
	I1019 12:14:37.417228  295274 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 12:14:37.417308  295274 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 12:14:37.417312  295274 kubeadm.go:318] 
	I1019 12:14:37.417400  295274 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ye03ax.m6ox9zrlec6c94l4 \
	I1019 12:14:37.417508  295274 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0ee0bbb0fbfe8419c71683408bd38502dbf921f3cb179cb0365daeb92f967309 \
	I1019 12:14:37.417529  295274 kubeadm.go:318] 	--control-plane 
	I1019 12:14:37.417533  295274 kubeadm.go:318] 
	I1019 12:14:37.417621  295274 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 12:14:37.417625  295274 kubeadm.go:318] 
	I1019 12:14:37.417762  295274 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ye03ax.m6ox9zrlec6c94l4 \
	I1019 12:14:37.417889  295274 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0ee0bbb0fbfe8419c71683408bd38502dbf921f3cb179cb0365daeb92f967309 
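
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key, which a joining node uses to pin the control plane. The standard recipe from the kubeadm docs reproduces it on the control-plane node; the CA path here follows this run's certificatesDir (/var/lib/minikube/certs) rather than the stock /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
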
	I1019 12:14:37.417900  295274 cni.go:84] Creating CNI manager for ""
	I1019 12:14:37.417908  295274 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:14:37.420883  295274 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 12:14:37.423738  295274 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 12:14:37.428479  295274 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 12:14:37.428557  295274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 12:14:37.442627  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 12:14:37.733181  295274 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:14:37.733266  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:37.733352  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-694780 minikube.k8s.io/updated_at=2025_10_19T12_14_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=addons-694780 minikube.k8s.io/primary=true
	I1019 12:14:37.865478  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:37.865544  295274 ops.go:34] apiserver oom_adj: -16
	I1019 12:14:38.365656  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:38.865627  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:39.366041  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:39.866521  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:40.366046  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:40.866435  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:41.366263  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:14:41.865749  295274 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
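
The repeated "get sa default" calls above are a readiness poll: the controller-manager creates the default ServiceAccount asynchronously after init, so minikube retries on a fixed cadence until it exists. The same wait as a plain loop (the timeout is added for the sketch):

    KUBECTL=/var/lib/minikube/binaries/v1.34.1/kubectl
    KC=/var/lib/minikube/kubeconfig
    for _ in $(seq 1 120); do
      sudo "$KUBECTL" get sa default --kubeconfig="$KC" >/dev/null 2>&1 && break
      sleep 0.5
    done
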
	I1019 12:14:42.046716  295274 kubeadm.go:1113] duration metric: took 4.313504373s to wait for elevateKubeSystemPrivileges
	I1019 12:14:42.046825  295274 kubeadm.go:402] duration metric: took 22.647794039s to StartCluster
	I1019 12:14:42.046884  295274 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:42.047667  295274 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 12:14:42.048191  295274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:14:42.049162  295274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 12:14:42.049503  295274 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:14:42.049655  295274 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1019 12:14:42.049753  295274 addons.go:69] Setting yakd=true in profile "addons-694780"
	I1019 12:14:42.049767  295274 addons.go:238] Setting addon yakd=true in "addons-694780"
	I1019 12:14:42.049790  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.050262  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.049616  295274 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:14:42.050832  295274 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-694780"
	I1019 12:14:42.050848  295274 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-694780"
	I1019 12:14:42.050872  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.051290  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.054229  295274 addons.go:69] Setting cloud-spanner=true in profile "addons-694780"
	I1019 12:14:42.054318  295274 addons.go:238] Setting addon cloud-spanner=true in "addons-694780"
	I1019 12:14:42.054409  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.055024  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.055671  295274 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-694780"
	I1019 12:14:42.055716  295274 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-694780"
	I1019 12:14:42.055752  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.056172  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.057603  295274 out.go:179] * Verifying Kubernetes components...
	I1019 12:14:42.057948  295274 addons.go:69] Setting storage-provisioner=true in profile "addons-694780"
	I1019 12:14:42.057976  295274 addons.go:238] Setting addon storage-provisioner=true in "addons-694780"
	I1019 12:14:42.058019  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.058463  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.068714  295274 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-694780"
	I1019 12:14:42.069042  295274 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-694780"
	I1019 12:14:42.068903  295274 addons.go:69] Setting volcano=true in profile "addons-694780"
	I1019 12:14:42.069314  295274 addons.go:238] Setting addon volcano=true in "addons-694780"
	I1019 12:14:42.069358  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.069885  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.081649  295274 addons.go:69] Setting default-storageclass=true in profile "addons-694780"
	I1019 12:14:42.081761  295274 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-694780"
	I1019 12:14:42.082176  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.068919  295274 addons.go:69] Setting volumesnapshots=true in profile "addons-694780"
	I1019 12:14:42.083054  295274 addons.go:238] Setting addon volumesnapshots=true in "addons-694780"
	I1019 12:14:42.086299  295274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:14:42.087238  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.087560  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.094842  295274 addons.go:69] Setting gcp-auth=true in profile "addons-694780"
	I1019 12:14:42.095131  295274 mustload.go:65] Loading cluster: addons-694780
	I1019 12:14:42.095673  295274 addons.go:69] Setting ingress=true in profile "addons-694780"
	I1019 12:14:42.095706  295274 addons.go:238] Setting addon ingress=true in "addons-694780"
	I1019 12:14:42.095751  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.096232  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.104439  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.123857  295274 addons.go:69] Setting ingress-dns=true in profile "addons-694780"
	I1019 12:14:42.123899  295274 addons.go:238] Setting addon ingress-dns=true in "addons-694780"
	I1019 12:14:42.123947  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.124445  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.128186  295274 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:14:42.128585  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.143647  295274 addons.go:69] Setting inspektor-gadget=true in profile "addons-694780"
	I1019 12:14:42.143694  295274 addons.go:238] Setting addon inspektor-gadget=true in "addons-694780"
	I1019 12:14:42.143734  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.144229  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.172000  295274 addons.go:69] Setting metrics-server=true in profile "addons-694780"
	I1019 12:14:42.172033  295274 addons.go:238] Setting addon metrics-server=true in "addons-694780"
	I1019 12:14:42.172081  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.172616  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.211571  295274 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-694780"
	I1019 12:14:42.211604  295274 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-694780"
	I1019 12:14:42.211648  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.212425  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.234780  295274 addons.go:69] Setting registry=true in profile "addons-694780"
	I1019 12:14:42.234824  295274 addons.go:238] Setting addon registry=true in "addons-694780"
	I1019 12:14:42.234865  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.235384  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.257240  295274 addons.go:69] Setting registry-creds=true in profile "addons-694780"
	I1019 12:14:42.257271  295274 addons.go:238] Setting addon registry-creds=true in "addons-694780"
	I1019 12:14:42.257321  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.257910  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.304452  295274 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1019 12:14:42.310048  295274 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 12:14:42.310075  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1019 12:14:42.310142  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.327149  295274 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1019 12:14:42.327273  295274 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	W1019 12:14:42.349904  295274 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1019 12:14:42.356888  295274 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1019 12:14:42.356913  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1019 12:14:42.356978  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.373431  295274 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:14:42.373550  295274 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1019 12:14:42.377344  295274 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:14:42.377489  295274 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1019 12:14:42.377504  295274 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1019 12:14:42.377570  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.380304  295274 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 12:14:42.380326  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1019 12:14:42.380388  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.390018  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1019 12:14:42.424298  295274 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:14:42.434986  295274 out.go:179]   - Using image docker.io/registry:3.0.0
	I1019 12:14:42.435118  295274 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:14:42.435130  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:14:42.435228  295274 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1019 12:14:42.435247  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.450638  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1019 12:14:42.450764  295274 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1019 12:14:42.451023  295274 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 12:14:42.451062  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1019 12:14:42.451158  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.465959  295274 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1019 12:14:42.466030  295274 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1019 12:14:42.466126  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.480125  295274 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-694780"
	I1019 12:14:42.480179  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.480602  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.495819  295274 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1019 12:14:42.496931  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1019 12:14:42.497039  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.504172  295274 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1019 12:14:42.513109  295274 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1019 12:14:42.513138  295274 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1019 12:14:42.513210  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.553875  295274 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1019 12:14:42.554085  295274 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1019 12:14:42.554130  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1019 12:14:42.554241  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.554636  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.554666  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.555560  295274 addons.go:238] Setting addon default-storageclass=true in "addons-694780"
	I1019 12:14:42.561866  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:42.562066  295274 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1019 12:14:42.562376  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:42.562655  295274 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1019 12:14:42.562670  295274 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1019 12:14:42.562725  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.555762  295274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 12:14:42.582623  295274 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 12:14:42.582643  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1019 12:14:42.582701  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.583213  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1019 12:14:42.586736  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1019 12:14:42.594936  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1019 12:14:42.596347  295274 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 12:14:42.596364  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1019 12:14:42.596433  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.613346  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.614178  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1019 12:14:42.617594  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1019 12:14:42.624156  295274 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1019 12:14:42.628362  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1019 12:14:42.628390  295274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1019 12:14:42.628458  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.637110  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.650493  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.666049  295274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:14:42.691835  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.708777  295274 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1019 12:14:42.715731  295274 out.go:179]   - Using image docker.io/busybox:stable
	I1019 12:14:42.718594  295274 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 12:14:42.718612  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1019 12:14:42.718685  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.734783  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.762208  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.768081  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.772404  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.776636  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	W1019 12:14:42.780525  295274 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 12:14:42.780559  295274 retry.go:31] will retry after 142.015581ms: ssh: handshake failed: EOF
	I1019 12:14:42.791076  295274 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:14:42.791097  295274 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:14:42.791157  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:42.807344  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.832094  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.838779  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:42.855571  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	W1019 12:14:42.870667  295274 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 12:14:42.870699  295274 retry.go:31] will retry after 242.668209ms: ssh: handshake failed: EOF
	W1019 12:14:42.923492  295274 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 12:14:42.923519  295274 retry.go:31] will retry after 511.228858ms: ssh: handshake failed: EOF
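[Editor's note] The handshake EOFs above are transient: the addon installers open many concurrent SSH sessions to the node, and minikube's retry helper simply waits a randomized interval and redials, as the "will retry after ..." lines show. A generic stdlib-only sketch of that pattern (a hypothetical helper, not the actual retry.go implementation):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter redials a flaky operation, sleeping a randomized
// sub-second interval between attempts, in the spirit of the
// "will retry after 142.015581ms" lines above. Generic sketch only.
func retryWithJitter(attempts int, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		delay := time.Duration(100+rand.Intn(400)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}
```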
	I1019 12:14:43.125434  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 12:14:43.310785  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:14:43.315491  295274 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1019 12:14:43.315516  295274 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1019 12:14:43.397009  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 12:14:43.459067  295274 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1019 12:14:43.459139  295274 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1019 12:14:43.461552  295274 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:43.461613  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1019 12:14:43.463547  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1019 12:14:43.484201  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 12:14:43.495509  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 12:14:43.497947  295274 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1019 12:14:43.498023  295274 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1019 12:14:43.531347  295274 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1019 12:14:43.531436  295274 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1019 12:14:43.588287  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:14:43.596328  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1019 12:14:43.596395  295274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1019 12:14:43.598543  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 12:14:43.600233  295274 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1019 12:14:43.600289  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1019 12:14:43.637199  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:43.653811  295274 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1019 12:14:43.653886  295274 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1019 12:14:43.699271  295274 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1019 12:14:43.699344  295274 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1019 12:14:43.714852  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 12:14:43.730720  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1019 12:14:43.730794  295274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1019 12:14:43.750100  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1019 12:14:43.816953  295274 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.237037083s)
	I1019 12:14:43.816978  295274 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
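[Editor's note] The sed pipeline that just completed splices a hosts{} block into the coredns Corefile so that host.minikube.internal resolves to the host gateway (192.168.49.1), then replaces the ConfigMap. A minimal client-go sketch of the same edit, assuming the Corefile lives under the "Corefile" key (illustrative only; minikube performs this via sed over SSH, not this code):

```go
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord splices a hosts{} block into the coredns Corefile so
// host.minikube.internal resolves to the host gateway, mirroring the
// sed pipeline in the log above.
func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	corefile, ok := cm.Data["Corefile"]
	if !ok {
		return fmt.Errorf("coredns ConfigMap has no Corefile key")
	}
	hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
	// Insert the hosts block just before the forward directive,
	// as the sed expression in the log does.
	cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
```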
	I1019 12:14:43.817940  295274 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.151870059s)
	I1019 12:14:43.818559  295274 node_ready.go:35] waiting up to 6m0s for node "addons-694780" to be "Ready" ...
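[Editor's note] node_ready.go now polls the node object until its Ready condition turns True, within the 6m0s budget logged above. A sketch of that kind of wait using client-go and apimachinery's wait package (an assumed helper, shown for clarity, not minikube's actual code):

```go
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady blocks until the named node reports Ready=True or the
// timeout elapses, re-checking every two seconds.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```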
	I1019 12:14:43.827978  295274 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1019 12:14:43.828044  295274 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1019 12:14:43.944118  295274 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1019 12:14:43.944184  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1019 12:14:43.979810  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1019 12:14:43.979874  295274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1019 12:14:44.081228  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1019 12:14:44.081294  295274 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1019 12:14:44.147101  295274 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1019 12:14:44.147173  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1019 12:14:44.149652  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1019 12:14:44.159044  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1019 12:14:44.159110  295274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1019 12:14:44.228229  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.102748298s)
	I1019 12:14:44.246408  295274 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 12:14:44.246493  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1019 12:14:44.322723  295274 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-694780" context rescaled to 1 replicas
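[Editor's note] kapi.go trims coredns to a single replica for this single-node cluster. With client-go that is one round-trip against the Deployment's scale subresource; a minimal sketch (hypothetical helper, names assumed):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS sets the coredns Deployment replica count through the
// scale subresource, matching the "rescaled to 1 replicas" step above.
func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
```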
	I1019 12:14:44.332453  295274 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1019 12:14:44.332518  295274 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1019 12:14:44.336837  295274 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1019 12:14:44.336910  295274 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1019 12:14:44.354259  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 12:14:44.505302  295274 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1019 12:14:44.505382  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1019 12:14:44.517502  295274 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 12:14:44.517569  295274 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1019 12:14:44.676881  295274 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1019 12:14:44.676908  295274 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1019 12:14:44.687278  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 12:14:44.830409  295274 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1019 12:14:44.830439  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1019 12:14:45.178510  295274 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1019 12:14:45.178540  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1019 12:14:45.401247  295274 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 12:14:45.401278  295274 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1019 12:14:45.695397  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1019 12:14:45.866520  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:46.658762  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.261721725s)
	I1019 12:14:46.658825  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.195215431s)
	I1019 12:14:46.658866  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.174595059s)
	I1019 12:14:46.658960  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.348135455s)
	I1019 12:14:48.322222  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.826630978s)
	I1019 12:14:48.322258  295274 addons.go:479] Verifying addon ingress=true in "addons-694780"
	I1019 12:14:48.322423  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.734060459s)
	I1019 12:14:48.322666  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.72406112s)
	I1019 12:14:48.322777  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.685505354s)
	W1019 12:14:48.322798  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:48.322812  295274 retry.go:31] will retry after 194.823768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
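[Editor's note] This retry looks doomed from the start: the earlier transfer logged `scp inspektor-gadget/ig-crd.yaml ... (14 bytes)` at 12:14:42.562, so the CRD manifest appears to have arrived essentially empty, which would explain why kubectl rejects it with "apiVersion not set, kind not set" on every attempt. A cheap guard would be to sanity-check a manifest before applying it; a sketch using sigs.k8s.io/yaml (illustrative, not part of minikube):

```go
package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/yaml"
)

// checkManifest fails fast when a manifest is missing the two fields
// kubectl validation rejected in the log above.
func checkManifest(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var tm struct {
		APIVersion string `json:"apiVersion"`
		Kind       string `json:"kind"`
	}
	if err := yaml.Unmarshal(data, &tm); err != nil {
		return err
	}
	if tm.APIVersion == "" || tm.Kind == "" {
		return fmt.Errorf("%s: apiVersion/kind not set (only %d bytes transferred?)", path, len(data))
	}
	return nil
}
```

A client-side dry run (`kubectl apply --dry-run=client -f ...`) before the real apply would catch the same problem without retry churn.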
	I1019 12:14:48.322866  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.607949186s)
	I1019 12:14:48.323006  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.572834224s)
	I1019 12:14:48.323023  295274 addons.go:479] Verifying addon registry=true in "addons-694780"
	I1019 12:14:48.323498  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.173752492s)
	I1019 12:14:48.323663  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.96932715s)
	W1019 12:14:48.324592  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1019 12:14:48.324623  295274 retry.go:31] will retry after 237.421838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1019 12:14:48.323753  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.636441908s)
	I1019 12:14:48.324644  295274 addons.go:479] Verifying addon metrics-server=true in "addons-694780"
	I1019 12:14:48.325525  295274 out.go:179] * Verifying ingress addon...
	I1019 12:14:48.327694  295274 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-694780 service yakd-dashboard -n yakd-dashboard
	
	I1019 12:14:48.327731  295274 out.go:179] * Verifying registry addon...
	W1019 12:14:48.329194  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:48.330875  295274 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1019 12:14:48.336333  295274 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1019 12:14:48.347942  295274 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 12:14:48.347969  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:48.348112  295274 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1019 12:14:48.348126  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:14:48.358956  295274 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
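[Editor's note] The default-storageclass failure above is the standard optimistic-concurrency conflict: something else updated the local-path StorageClass between minikube's read and write, so the API server rejected the stale object. The usual remedy is to re-read and re-apply under client-go's retry.RetryOnConflict; a minimal sketch (not minikube's code):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation on a StorageClass,
// re-reading and re-applying whenever the write hits a 409 conflict
// like the one logged above.
func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}
```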
	I1019 12:14:48.518591  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:48.562921  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 12:14:48.601822  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.906377582s)
	I1019 12:14:48.601854  295274 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-694780"
	I1019 12:14:48.605106  295274 out.go:179] * Verifying csi-hostpath-driver addon...
	I1019 12:14:48.607924  295274 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1019 12:14:48.622940  295274 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 12:14:48.622967  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:48.834337  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:48.839004  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:49.117097  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:49.342167  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:49.343826  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:49.611923  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:49.629810  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.111127425s)
	W1019 12:14:49.629890  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:49.629916  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.066955932s)
	I1019 12:14:49.629924  295274 retry.go:31] will retry after 448.155711ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:49.834440  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:49.839467  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:50.079158  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:50.111958  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:50.171043  295274 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1019 12:14:50.171196  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:50.194357  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:50.335076  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:50.339667  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:50.347272  295274 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1019 12:14:50.360621  295274 addons.go:238] Setting addon gcp-auth=true in "addons-694780"
	I1019 12:14:50.360673  295274 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:14:50.361113  295274 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:14:50.399436  295274 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1019 12:14:50.399494  295274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:14:50.422224  295274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:14:50.611999  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:50.821363  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:50.835258  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:50.839415  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 12:14:50.961482  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:50.961566  295274 retry.go:31] will retry after 562.795912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:50.965219  295274 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:14:50.968201  295274 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1019 12:14:50.971165  295274 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1019 12:14:50.971188  295274 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1019 12:14:50.985512  295274 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1019 12:14:50.985538  295274 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1019 12:14:50.998282  295274 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 12:14:50.998310  295274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1019 12:14:51.013425  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 12:14:51.111969  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:51.338527  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:51.340021  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:51.497723  295274 addons.go:479] Verifying addon gcp-auth=true in "addons-694780"
	I1019 12:14:51.502848  295274 out.go:179] * Verifying gcp-auth addon...
	I1019 12:14:51.506527  295274 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1019 12:14:51.514789  295274 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1019 12:14:51.514859  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:51.524977  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:51.611173  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:51.834593  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:51.839147  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:52.010555  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:52.111504  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:52.306240  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:52.306275  295274 retry.go:31] will retry after 524.797045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:52.333921  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:52.339317  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:52.509761  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:52.612075  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:52.823142  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:52.831466  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:52.835433  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:52.839333  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:53.009565  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:53.111424  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:53.334733  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:53.339441  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:53.510702  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:53.611605  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:53.659425  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:53.659460  295274 retry.go:31] will retry after 1.836989408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:53.834283  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:53.839853  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:54.010116  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:54.110937  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:54.334075  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:54.339825  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:54.509673  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:54.611831  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:54.834681  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:54.839492  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:55.010863  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:55.111517  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:55.321578  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:55.334921  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:55.339419  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:55.496920  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:55.510082  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:55.611537  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:55.836283  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:55.839769  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:56.016846  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:56.111318  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:56.300093  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:56.300129  295274 retry.go:31] will retry after 1.362357652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:14:56.335930  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:56.345880  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:56.510118  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:56.611253  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:56.835231  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:56.839841  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:57.010173  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:57.110836  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:57.323574  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:57.334934  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:57.339528  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:57.509535  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:57.611453  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:57.662711  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:14:57.834131  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:57.839700  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:58.010394  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:58.111507  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:58.334777  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:58.339543  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1019 12:14:58.484484  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:14:58.484514  295274 retry.go:31] will retry after 2.965888162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:14:58.509364  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:58.611405  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:58.834448  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:58.839295  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:59.010258  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:59.111591  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:14:59.333744  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:59.339693  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:14:59.509279  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:14:59.611044  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:14:59.822164  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:14:59.834075  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:14:59.839918  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:00.014436  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:00.137612  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:00.346105  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:00.346184  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:00.512182  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:00.612107  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:00.835582  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:00.840655  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:01.010251  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:01.113264  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:01.334837  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:01.340141  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:01.451502  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:15:01.511042  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:01.611694  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:01.822206  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:01.834513  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:01.839504  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:02.010464  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:02.112347  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:02.272774  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:15:02.272806  295274 retry.go:31] will retry after 5.185809845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
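The waits between attempts (1.836989408s, 1.362357652s, 2.965888162s, 5.185809845s so far) grow roughly geometrically with random jitter, which is why they are not strictly increasing. A generic sketch of that retry pattern (illustrative; not minikube's actual retry.go):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn until it succeeds or attempts are exhausted, sleeping an
	// exponentially growing, jittered delay between failures.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base << uint(i)                      // exponential growth
			d += time.Duration(rand.Int63n(int64(d))) // up to 100% jitter
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		_ = retry(5, time.Second, func() error { return fmt.Errorf("apply failed") })
	}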
	I1019 12:15:02.335204  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:02.340165  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:02.510448  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:02.611397  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:02.834905  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:02.839914  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:03.009854  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:03.110900  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:03.334543  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:03.339352  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:03.510544  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:03.612023  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:03.834695  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:03.839403  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:04.011189  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:04.110972  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:04.322178  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:04.334394  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:04.339846  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:04.509808  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:04.611615  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:04.834767  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:04.839650  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:05.009751  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:05.111997  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:05.334254  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:05.340131  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:05.510105  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:05.612166  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:05.834315  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:05.839111  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:06.010405  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:06.111489  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:06.322749  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:06.334602  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:06.339455  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:06.510321  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:06.611003  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:06.834746  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:06.839227  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:07.009494  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:07.111206  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:07.334116  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:07.340085  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:07.459331  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:15:07.510035  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:07.611686  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:07.834914  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:07.839475  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:08.010197  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:08.111747  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:08.253798  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:15:08.253829  295274 retry.go:31] will retry after 6.015658051s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	W1019 12:15:08.323676  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:08.338520  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:08.339961  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:08.513176  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:08.610999  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:08.834948  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:08.839483  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:09.009598  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:09.111374  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:09.334599  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:09.339052  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:09.510174  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:09.611196  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:09.833973  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:09.839451  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:10.010657  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:10.111848  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:10.334616  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:10.339090  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:10.510016  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:10.610873  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:10.821752  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:10.835026  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:10.839602  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:11.010023  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:11.112154  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:11.334837  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:11.339761  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:11.509646  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:11.611417  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:11.833853  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:11.839625  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:12.010774  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:12.111823  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:12.334724  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:12.338999  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:12.510114  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:12.611231  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:12.822184  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:12.834394  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:12.839964  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:13.010051  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:13.110954  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:13.333975  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:13.339892  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:13.510124  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:13.610802  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:13.834692  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:13.839597  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:14.010028  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:14.112310  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:14.270605  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:15:14.334742  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:14.339714  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:14.509884  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:14.611890  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:14.822591  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
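Each node_ready warning above is one poll of the node object's conditions: the node counts as Ready only when its NodeReady condition reports True. A sketch of that predicate using the k8s.io/api types (illustrative; not minikube's node_ready.go):

	package nodeready

	import corev1 "k8s.io/api/core/v1"

	// isReady reports whether the node's NodeReady condition is True; the
	// "Ready":"False" warnings in this log are this predicate returning false.
	func isReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}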
	I1019 12:15:14.835698  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:14.839454  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:15.012928  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:15:15.081484  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:15:15.081520  295274 retry.go:31] will retry after 5.916791874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:15:15.112059  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:15.333631  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:15.339577  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:15.509913  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:15.610874  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:15.833785  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:15.839141  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:16.010236  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:16.111511  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:16.339018  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:16.340386  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:16.509283  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:16.611216  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:16.834127  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:16.839726  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:17.009879  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:17.111951  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:17.321838  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:17.334692  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:17.339176  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:17.510153  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:17.611120  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:17.833814  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:17.839080  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:18.011212  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:18.111082  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:18.334085  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:18.340533  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:18.509531  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:18.611575  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:18.834025  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:18.839604  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:19.009429  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:19.111407  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:19.322232  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	I1019 12:15:19.333928  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:19.339329  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:19.509502  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:19.611264  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:19.833825  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:19.839271  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:20.009919  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:20.111843  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:20.334651  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:20.339136  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:20.509971  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:20.610739  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:20.835134  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:20.839798  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:20.999297  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:15:21.010610  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:21.112424  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:21.335235  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:21.340651  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:21.510188  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:21.611983  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:15:21.826926  295274 node_ready.go:57] node "addons-694780" has "Ready":"False" status (will retry)
	W1019 12:15:21.832642  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:15:21.832712  295274 retry.go:31] will retry after 9.288094305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:15:21.835001  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:21.839909  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:22.010506  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:22.111224  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:22.334358  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:22.339893  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:22.510141  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:22.611009  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:22.834468  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:22.839869  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:23.033371  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:23.117072  295274 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 12:15:23.117092  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
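Each kapi.go line is one iteration of the same loop: list the pods matching a label selector and keep waiting while any is still Pending (while the node is NotReady the addon pods cannot be scheduled, hence the long Pending run above). A compressed client-go sketch, with the kubeconfig path, namespace, and selector taken from the log and everything else illustrative:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		sel := "kubernetes.io/minikube-addons=csi-hostpath-driver"
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err == nil {
				pending := 0
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodPending {
						pending++
					}
				}
				if len(pods.Items) > 0 && pending == 0 {
					fmt.Println("all pods for", sel, "are past Pending")
					return
				}
				fmt.Printf("waiting for pod %q: %d/%d Pending\n", sel, pending, len(pods.Items))
			}
			time.Sleep(500 * time.Millisecond)
		}
	}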
	I1019 12:15:23.378865  295274 node_ready.go:49] node "addons-694780" is "Ready"
	I1019 12:15:23.378892  295274 node_ready.go:38] duration metric: took 39.560317036s for node "addons-694780" to be "Ready" ...
	I1019 12:15:23.378908  295274 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:15:23.378966  295274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
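The apiserver process wait is a pgrep poll executed inside the node over SSH; run locally, the equivalent is (illustrative sketch):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// -x: exact match, -n: newest process, -f: match the full command line.
		for {
			out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				fmt.Printf("kube-apiserver pid: %s", out)
				return
			}
			time.Sleep(time.Second)
		}
	}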
	I1019 12:15:23.384532  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:23.404228  295274 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 12:15:23.404253  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:23.409924  295274 api_server.go:72] duration metric: took 41.359450098s to wait for apiserver process to appear ...
	I1019 12:15:23.409950  295274 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:15:23.409971  295274 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1019 12:15:23.423321  295274 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1019 12:15:23.424534  295274 api_server.go:141] control plane version: v1.34.1
	I1019 12:15:23.424561  295274 api_server.go:131] duration metric: took 14.603146ms to wait for apiserver health ...
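With the process up, health is decided by GET /healthz on the apiserver returning HTTP 200 with body "ok". The real check trusts the cluster CA; the sketch below skips TLS verification only to stay self-contained (illustrative):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{
				// Demo only; real code should verify the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}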
	I1019 12:15:23.424570  295274 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:15:23.441169  295274 system_pods.go:59] 19 kube-system pods found
	I1019 12:15:23.441213  295274 system_pods.go:61] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:15:23.441223  295274 system_pods.go:61] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:23.441232  295274 system_pods.go:61] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:23.441237  295274 system_pods.go:61] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending
	I1019 12:15:23.441244  295274 system_pods.go:61] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:23.441249  295274 system_pods.go:61] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:23.441259  295274 system_pods.go:61] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:23.441264  295274 system_pods.go:61] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:23.441275  295274 system_pods.go:61] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:23.441288  295274 system_pods.go:61] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:23.441293  295274 system_pods.go:61] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:23.441298  295274 system_pods.go:61] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending
	I1019 12:15:23.441303  295274 system_pods.go:61] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending
	I1019 12:15:23.441315  295274 system_pods.go:61] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:23.441321  295274 system_pods.go:61] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:23.441335  295274 system_pods.go:61] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending
	I1019 12:15:23.441343  295274 system_pods.go:61] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:23.441350  295274 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:23.441359  295274 system_pods.go:61] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:15:23.441367  295274 system_pods.go:74] duration metric: took 16.79045ms to wait for pod list to return data ...
	I1019 12:15:23.441383  295274 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:15:23.445300  295274 default_sa.go:45] found service account: "default"
	I1019 12:15:23.445327  295274 default_sa.go:55] duration metric: took 3.938468ms for default service account to be created ...
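The default-service-account wait is a simple existence check: pods cannot be created in a namespace until the controller manager has provisioned its "default" ServiceAccount. A client-go sketch (illustrative; not minikube's default_sa.go):

	package defaultsa

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// exists reports whether the "default" ServiceAccount is present in the
	// "default" namespace.
	func exists(cs *kubernetes.Clientset) bool {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		return err == nil
	}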
	I1019 12:15:23.445338  295274 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:15:23.460218  295274 system_pods.go:86] 19 kube-system pods found
	I1019 12:15:23.460257  295274 system_pods.go:89] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:15:23.460267  295274 system_pods.go:89] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:23.460276  295274 system_pods.go:89] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:23.460280  295274 system_pods.go:89] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending
	I1019 12:15:23.460285  295274 system_pods.go:89] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:23.460290  295274 system_pods.go:89] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:23.460299  295274 system_pods.go:89] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:23.460305  295274 system_pods.go:89] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:23.460314  295274 system_pods.go:89] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:23.460319  295274 system_pods.go:89] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:23.460329  295274 system_pods.go:89] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:23.460333  295274 system_pods.go:89] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending
	I1019 12:15:23.460337  295274 system_pods.go:89] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending
	I1019 12:15:23.460343  295274 system_pods.go:89] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:23.460353  295274 system_pods.go:89] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:23.460358  295274 system_pods.go:89] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending
	I1019 12:15:23.460364  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:23.460375  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:23.460381  295274 system_pods.go:89] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:15:23.460397  295274 retry.go:31] will retry after 255.328592ms: missing components: kube-dns
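"missing components: kube-dns" means no kube-system pod for that component is Running yet; CoreDNS pods conventionally carry the label k8s-app=kube-dns (an assumption about the mapping, not read from this log). A sketch of the component check (illustrative; not minikube's system_pods.go):

	package syspods

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// missing returns the component labels for which no kube-system pod is
	// Running yet, e.g. []string{"k8s-app=kube-dns"} in the retry above.
	func missing(cs *kubernetes.Clientset, labels []string) []string {
		var out []string
		for _, l := range labels {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: l})
			running := false
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						running = true
						break
					}
				}
			}
			if !running {
				out = append(out, l)
			}
		}
		return out
	}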
	I1019 12:15:23.523688  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:23.617333  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:23.736116  295274 system_pods.go:86] 19 kube-system pods found
	I1019 12:15:23.736166  295274 system_pods.go:89] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:15:23.736175  295274 system_pods.go:89] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:23.736185  295274 system_pods.go:89] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:23.736193  295274 system_pods.go:89] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:15:23.736198  295274 system_pods.go:89] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:23.736203  295274 system_pods.go:89] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:23.736213  295274 system_pods.go:89] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:23.736218  295274 system_pods.go:89] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:23.736229  295274 system_pods.go:89] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:23.736233  295274 system_pods.go:89] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:23.736238  295274 system_pods.go:89] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:23.736244  295274 system_pods.go:89] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:15:23.736257  295274 system_pods.go:89] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:15:23.736266  295274 system_pods.go:89] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:23.736277  295274 system_pods.go:89] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:23.736283  295274 system_pods.go:89] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:15:23.736290  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:23.736297  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:23.736310  295274 system_pods.go:89] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:15:23.736330  295274 retry.go:31] will retry after 304.376177ms: missing components: kube-dns
	I1019 12:15:23.837198  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:23.937744  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:24.039000  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:24.048786  295274 system_pods.go:86] 19 kube-system pods found
	I1019 12:15:24.048827  295274 system_pods.go:89] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:15:24.048837  295274 system_pods.go:89] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:24.048844  295274 system_pods.go:89] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:24.048850  295274 system_pods.go:89] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:15:24.048855  295274 system_pods.go:89] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:24.048861  295274 system_pods.go:89] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:24.048869  295274 system_pods.go:89] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:24.048874  295274 system_pods.go:89] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:24.048886  295274 system_pods.go:89] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:24.048890  295274 system_pods.go:89] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:24.048895  295274 system_pods.go:89] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:24.048901  295274 system_pods.go:89] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:15:24.048912  295274 system_pods.go:89] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:15:24.048920  295274 system_pods.go:89] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:24.048938  295274 system_pods.go:89] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:24.048944  295274 system_pods.go:89] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:15:24.048951  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:24.048960  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:24.048966  295274 system_pods.go:89] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:15:24.048988  295274 retry.go:31] will retry after 401.197866ms: missing components: kube-dns
	I1019 12:15:24.140406  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:24.335503  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:24.339440  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:24.456162  295274 system_pods.go:86] 19 kube-system pods found
	I1019 12:15:24.456197  295274 system_pods.go:89] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:15:24.456206  295274 system_pods.go:89] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:24.456214  295274 system_pods.go:89] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:24.456220  295274 system_pods.go:89] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:15:24.456224  295274 system_pods.go:89] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:24.456230  295274 system_pods.go:89] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:24.456235  295274 system_pods.go:89] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:24.456244  295274 system_pods.go:89] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:24.456437  295274 system_pods.go:89] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:24.456454  295274 system_pods.go:89] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:24.456463  295274 system_pods.go:89] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:24.456471  295274 system_pods.go:89] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:15:24.456484  295274 system_pods.go:89] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:15:24.456492  295274 system_pods.go:89] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:24.456499  295274 system_pods.go:89] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:24.456508  295274 system_pods.go:89] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:15:24.456516  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:24.456526  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:24.456533  295274 system_pods.go:89] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:15:24.456563  295274 retry.go:31] will retry after 379.97275ms: missing components: kube-dns
	I1019 12:15:24.509665  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:24.611932  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:24.835072  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:24.842364  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:24.842698  295274 system_pods.go:86] 19 kube-system pods found
	I1019 12:15:24.842722  295274 system_pods.go:89] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:15:24.842730  295274 system_pods.go:89] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:24.842752  295274 system_pods.go:89] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:24.842759  295274 system_pods.go:89] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:15:24.842764  295274 system_pods.go:89] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:24.842770  295274 system_pods.go:89] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:24.842774  295274 system_pods.go:89] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:24.842782  295274 system_pods.go:89] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:24.842788  295274 system_pods.go:89] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:24.842794  295274 system_pods.go:89] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:24.842799  295274 system_pods.go:89] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:24.842805  295274 system_pods.go:89] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:15:24.842812  295274 system_pods.go:89] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:15:24.842819  295274 system_pods.go:89] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:24.842825  295274 system_pods.go:89] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:24.842832  295274 system_pods.go:89] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:15:24.842840  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:24.842848  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:24.842854  295274 system_pods.go:89] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:15:24.842871  295274 retry.go:31] will retry after 571.269725ms: missing components: kube-dns
	I1019 12:15:25.010624  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:25.112004  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:25.334437  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:25.345300  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:25.419287  295274 system_pods.go:86] 19 kube-system pods found
	I1019 12:15:25.419326  295274 system_pods.go:89] "coredns-66bc5c9577-pmnfn" [bec1ffaa-adfa-4ec0-8900-094eb23c474c] Running
	I1019 12:15:25.419337  295274 system_pods.go:89] "csi-hostpath-attacher-0" [f8da0a80-81fe-45d9-9bc4-546a88956349] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:15:25.419346  295274 system_pods.go:89] "csi-hostpath-resizer-0" [c8b31bdd-8168-41a1-8c0a-df79aea585b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:15:25.419353  295274 system_pods.go:89] "csi-hostpathplugin-qx76c" [06a5da30-8f06-481f-b8d9-f7c68e9dc1a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:15:25.419359  295274 system_pods.go:89] "etcd-addons-694780" [58288863-2f47-4ab4-afeb-15a2a0cc2b72] Running
	I1019 12:15:25.419365  295274 system_pods.go:89] "kindnet-hbjtx" [17a70783-7bb2-4e04-87ff-29e9ae6157ec] Running
	I1019 12:15:25.419369  295274 system_pods.go:89] "kube-apiserver-addons-694780" [b8cf8d39-f915-4a03-b260-b53beeaa93ab] Running
	I1019 12:15:25.419373  295274 system_pods.go:89] "kube-controller-manager-addons-694780" [b9b890a0-4020-4659-a97d-606961e57787] Running
	I1019 12:15:25.419387  295274 system_pods.go:89] "kube-ingress-dns-minikube" [efc9b336-ddb4-4c69-9439-2a2d7435f8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:15:25.419397  295274 system_pods.go:89] "kube-proxy-g2s4z" [2e13f778-44e4-41ee-b5dd-74ecd5c6ba75] Running
	I1019 12:15:25.419402  295274 system_pods.go:89] "kube-scheduler-addons-694780" [9ea837f2-390c-41a1-a839-836b1e1d5e70] Running
	I1019 12:15:25.419411  295274 system_pods.go:89] "metrics-server-85b7d694d7-qjfpt" [5a14d2c0-b959-4c84-86d6-2921e765a741] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:15:25.419421  295274 system_pods.go:89] "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:15:25.419427  295274 system_pods.go:89] "registry-6b586f9694-cz995" [13fda06a-9f49-47a0-9b61-d3a6269e5357] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:15:25.419434  295274 system_pods.go:89] "registry-creds-764b6fb674-c7zhl" [13adddb6-d4bf-4eff-8eef-f96cbd11e787] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:15:25.419456  295274 system_pods.go:89] "registry-proxy-4r8wk" [0e1da561-db9d-4edf-ada6-d637df7913be] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:15:25.419463  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-slbnx" [a974aadd-de01-4b77-a455-661a00173306] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:25.419475  295274 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tpk9s" [8642e08b-acc7-4205-a1bf-ded7ee16625c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:15:25.419479  295274 system_pods.go:89] "storage-provisioner" [1608e4fc-9b1c-4b5e-bc5d-d20a14adf01d] Running
	I1019 12:15:25.419493  295274 system_pods.go:126] duration metric: took 1.974147977s to wait for k8s-apps to be running ...
	I1019 12:15:25.419503  295274 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:15:25.419562  295274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:15:25.432842  295274 system_svc.go:56] duration metric: took 13.329938ms WaitForService to wait for kubelet
	I1019 12:15:25.432871  295274 kubeadm.go:586] duration metric: took 43.382402791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:15:25.432893  295274 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:15:25.435878  295274 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 12:15:25.435958  295274 node_conditions.go:123] node cpu capacity is 2
	I1019 12:15:25.435972  295274 node_conditions.go:105] duration metric: took 3.072988ms to run NodePressure ...
	I1019 12:15:25.435984  295274 start.go:241] waiting for startup goroutines ...
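
Once the pods are running, the kubelet check above shells into the node and asks systemd directly; a zero exit status from `systemctl is-active --quiet` means the unit is active, and the 13.3ms duration metric is just the round trip for that command. A minimal local sketch of the probe, using the exact command from the ssh_runner.go line (the real code executes it inside the minikube node over SSH, which this sketch does not):

// Sketch: probe kubelet the way the ssh_runner line above does.
// Assumes systemctl is available locally; minikube runs this in-node.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func kubeletActive() (bool, time.Duration) {
	start := time.Now()
	// --quiet suppresses output; the exit code alone carries the answer.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	return err == nil, time.Since(start)
}

func main() {
	ok, took := kubeletActive()
	fmt.Printf("kubelet active=%v (took %v)\n", ok, took)
}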
	I1019 12:15:25.509932  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:25.611637  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:25.834420  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:25.839728  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:26.010605  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:26.112599  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:26.335217  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:26.340378  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:26.510563  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:26.611884  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:26.835238  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:26.840059  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:27.012072  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:27.112925  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:27.334258  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:27.340053  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:27.510561  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:27.611851  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:27.833874  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:27.839477  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:28.009250  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:28.111773  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:28.335025  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:28.339951  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:28.510124  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:28.611891  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:28.834479  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:28.839424  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:29.009531  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:29.111715  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:29.334400  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:29.339832  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:29.510742  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:29.612457  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:29.835752  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:29.840098  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:30.011480  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:30.112981  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:30.334769  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:30.340122  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:30.510423  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:30.612172  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:30.834721  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:30.839449  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:31.009792  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:31.112489  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:31.121125  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:15:31.334636  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:31.339529  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:31.509779  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:31.612475  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:31.838829  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:31.841701  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:32.010777  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:32.112109  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:32.150189  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.028968676s)
	W1019 12:15:32.150221  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:15:32.150241  295274 retry.go:31] will retry after 12.156396731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
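
Note that the inspektor-gadget apply fails only on ig-crd.yaml: kubectl's client-side validation rejects a YAML document that carries no apiVersion or kind, which typically points at an empty, truncated, or stray-`---`-separated document, while every other object in the same command applies cleanly ("unchanged"/"configured"). A quick way to locate the offending document, sketched with the Go standard library (the file path comes from the log; the naive "---" split is an assumption, a real check would use a YAML decoder):

// Sketch: flag YAML documents missing apiVersion/kind, the condition
// kubectl's validator reports above. Naive multi-doc splitting.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(data), "\n---") {
		trimmed := strings.TrimSpace(doc)
		hasAPI := strings.Contains(trimmed, "apiVersion:")
		hasKind := strings.HasPrefix(trimmed, "kind:") || strings.Contains(trimmed, "\nkind:")
		if !hasAPI || !hasKind {
			fmt.Printf("document %d missing apiVersion/kind (%d bytes)\n", i, len(trimmed))
		}
	}
}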
	I1019 12:15:32.335890  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:32.343474  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:32.510071  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:32.611081  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:32.834594  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:32.839808  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:33.010581  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:33.111959  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:33.335207  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:33.340456  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:33.509940  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:33.612220  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:33.834697  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:33.839350  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:34.010654  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:34.112737  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:34.334511  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:34.339353  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:34.510260  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:34.611426  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:34.834899  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:34.839829  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:35.011614  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:35.112344  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:35.334460  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:35.348661  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:35.509841  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:35.612251  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:35.835671  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:35.839361  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:36.011491  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:36.112375  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:36.337856  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:36.341710  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:36.510019  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:36.611732  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:36.835571  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:36.840129  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:37.011071  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:37.111709  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:37.335071  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:37.340334  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:37.510593  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:37.612029  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:37.834468  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:37.839337  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:38.010949  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:38.111810  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:38.334228  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:38.340575  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:38.510264  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:38.611534  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:38.835369  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:38.839050  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:39.010546  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:39.111864  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:39.348966  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:39.349383  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:39.511027  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:39.611453  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:39.835182  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:39.840068  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:40.015734  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:40.111999  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:40.339474  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:40.340629  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:40.509732  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:40.612473  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:40.835108  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:40.840417  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:41.012483  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:41.111276  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:41.334625  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:41.343441  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:41.510687  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:41.612429  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:41.834846  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:41.839924  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:42.011700  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:42.114176  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:42.335966  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:42.340112  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:42.510750  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:42.613030  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:42.834639  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:42.839482  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:43.011021  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:43.112195  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:43.334724  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:43.339609  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:43.510818  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:43.611843  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:43.835564  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:43.839249  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:44.010535  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:44.112249  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:44.307694  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:15:44.334719  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:44.339771  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:44.510370  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:44.611881  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:44.835427  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:44.840133  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:45.010392  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:45.111871  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:45.341037  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:45.343670  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:45.414612  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.106822707s)
	W1019 12:15:45.414701  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:15:45.414736  295274 retry.go:31] will retry after 22.883577744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
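
The second attempt fails identically, and the scheduled delays grow from 12.16s to 22.88s, consistent with exponential backoff plus jitter; the earlier kube-dns waits (255ms, 304ms, 401ms, 571ms) follow the same shape at a smaller scale. kubectl's own hint, --validate=false, would mask the bad document rather than fix it. A compact sketch of jittered exponential backoff, with the growth factor and jitter fraction as assumptions rather than minikube's actual constants:

// Sketch: exponential backoff with jitter, matching the growth pattern
// of the retry.go delays above. Factor and jitter are illustrative.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func backoff(attempt int, base time.Duration) time.Duration {
	d := base
	for i := 0; i < attempt; i++ {
		d = time.Duration(float64(d) * 1.6) // growth factor (assumed)
	}
	jitter := time.Duration(rand.Float64() * 0.5 * float64(d)) // up to +50%
	return d + jitter
}

func main() {
	for a := 0; a < 4; a++ {
		fmt.Printf("attempt %d: sleep %v\n", a, backoff(a, 250*time.Millisecond))
	}
}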
	I1019 12:15:45.509866  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:45.612244  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:45.834781  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:45.839708  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:46.010497  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:46.112377  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:46.335483  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:46.342369  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:46.510770  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:46.612451  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:46.835044  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:46.840088  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:47.010670  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:47.112454  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:47.334841  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:47.339949  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:47.510319  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:47.612165  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:47.834145  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:47.840467  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:48.011383  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:48.111863  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:48.334283  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:48.339687  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:48.524545  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:48.611942  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:48.835225  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:48.843026  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:49.017738  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:49.118838  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:49.363825  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:49.364010  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:49.514378  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:49.614652  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:49.835411  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:49.839256  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:50.012650  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:50.112553  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:50.335597  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:50.339995  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:50.516350  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:50.611481  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:50.834543  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:50.839802  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:51.010420  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:51.112127  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:51.387961  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:51.388347  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:51.511581  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:51.612134  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:51.834600  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:51.839996  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:52.010709  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:52.112190  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:52.335100  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:52.340248  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:52.511029  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:52.611890  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:52.835348  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:52.839660  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:53.009887  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:53.111314  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:53.335115  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:53.340343  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:53.509998  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:53.611724  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:53.835958  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:53.840259  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:54.012632  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:54.151352  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:54.335646  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:54.340925  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:54.510636  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:54.612386  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:54.835029  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:54.840560  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:55.010572  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:55.112039  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:55.334215  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:55.340252  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:55.510202  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:55.611678  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:55.835403  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:55.839403  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:56.009563  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:56.111722  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:56.335303  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:56.338697  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:56.510014  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:56.611666  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:56.835017  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:56.840019  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:57.011252  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:57.111631  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:57.335119  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:57.340138  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:57.510539  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:57.611765  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:57.834963  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:57.839774  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:58.010060  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:58.111227  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:58.334467  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:58.339712  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:58.510419  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:58.612355  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:58.842645  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:58.842732  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:59.010152  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:59.113611  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:59.334580  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:59.339313  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:15:59.512453  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:15:59.612121  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:15:59.834602  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:15:59.839505  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:00.011576  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:00.126618  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:00.336448  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:00.348181  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:00.511138  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:00.611722  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:00.834937  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:00.839822  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:01.009999  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:01.111727  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:01.334337  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:01.340901  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:01.510279  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:01.611745  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:01.834708  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:01.839686  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:02.010881  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:02.111749  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:02.335272  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:02.340199  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:02.510509  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:02.612216  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:02.834560  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:02.839146  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:03.010520  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:03.111503  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:03.334713  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:03.339834  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:03.509634  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:03.611758  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:03.834558  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:03.838947  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:04.010349  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:04.111679  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:04.335066  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:04.340075  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:04.510611  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:04.612260  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:04.834669  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:04.839630  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:05.010184  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:05.111848  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:05.335276  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:05.338958  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:05.518404  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:05.611459  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:05.834936  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:05.839677  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:06.010896  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:06.112226  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:06.334654  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:06.340664  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:06.510120  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:06.614683  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:06.835597  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:06.839995  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:07.010136  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:07.112719  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:07.337776  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:07.347081  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:07.515756  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:07.632539  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:07.846506  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:07.847292  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:08.015674  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:08.116093  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:08.299199  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:16:08.340342  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:08.344472  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:08.516955  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:08.619581  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:08.835511  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:08.840333  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:09.012689  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:09.114188  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:09.335198  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:09.340266  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:09.431421  295274 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.132126526s)
	W1019 12:16:09.431460  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:16:09.431498  295274 retry.go:31] will retry after 32.744760924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:16:09.514140  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:09.611601  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:09.834378  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:09.839190  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:10.019000  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:10.112234  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:10.334895  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:10.339303  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:10.511133  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:10.611602  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:10.834738  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:10.839557  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:11.010363  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:11.112626  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:11.335022  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:11.340828  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:11.510113  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:11.618621  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:11.835306  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:11.839241  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:12.009940  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:12.111188  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:12.335953  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:12.344104  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:12.510739  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:12.612189  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:12.834987  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:12.839718  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:13.009863  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:13.113230  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:13.335818  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:13.339354  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:16:13.509240  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:13.612238  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:13.835150  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:13.840258  295274 kapi.go:107] duration metric: took 1m25.503924007s to wait for kubernetes.io/minikube-addons=registry ...
	I1019 12:16:14.011359  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:14.112047  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:14.333916  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:14.510125  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:14.611595  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:14.834540  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:15.009724  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:15.112102  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:15.333938  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:15.509941  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:15.611663  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:15.834756  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:16.009712  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:16.111597  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:16.334495  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:16.509988  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:16.611003  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:16.834291  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:17.009624  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:17.111651  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:17.333740  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:17.509820  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:17.611433  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:17.835061  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:18.009878  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:18.111292  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:18.334620  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:18.511719  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:18.612518  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:18.834776  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:19.010030  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:19.111380  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:19.334578  295274 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:16:19.510151  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:19.611689  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:19.834972  295274 kapi.go:107] duration metric: took 1m31.504099622s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1019 12:16:20.010224  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:20.111167  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:20.509856  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:20.731109  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:21.010476  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:21.111911  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:21.509794  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:21.612483  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:22.010763  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:22.112541  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:22.510278  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:22.611418  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:23.011585  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:23.117607  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:23.511057  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:23.621778  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:24.012497  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:24.112241  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:24.516433  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:24.612114  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:25.010705  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:25.111892  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:25.512418  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:25.611483  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:26.010117  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:26.111598  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:26.510190  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:26.616081  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:27.011003  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:27.111801  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:27.510203  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:27.611631  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:28.009483  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:28.116351  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:28.510523  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:28.612070  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:29.009654  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:29.112029  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:29.510761  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:29.613252  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:30.013796  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:30.113584  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:30.509632  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:30.613044  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:31.010528  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:31.112924  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:31.511497  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:16:31.622650  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:32.011140  295274 kapi.go:107] duration metric: took 1m40.504611017s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1019 12:16:32.062201  295274 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-694780 cluster.
	I1019 12:16:32.093979  295274 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1019 12:16:32.111474  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:32.151876  295274 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
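	A minimal sketch of the opt-out the two messages above describe, assuming a throwaway pod name and image (hypothetical, for illustration only). Since the gcp-auth webhook mutates pods at creation time, the label is passed to kubectl run up front rather than added to a running pod:

	    # hypothetical name/image; the label key is the one named in the log above
	    kubectl run skip-demo --image=busybox:1.28 \
	      --labels=gcp-auth-skip-secret=true \
	      -- sleep 3600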
	I1019 12:16:32.612031  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:33.112224  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:33.615957  295274 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:16:34.112405  295274 kapi.go:107] duration metric: took 1m45.504479047s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
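	The kapi.go waits above poll each label selector roughly every half second until the matching pods leave Pending. A rough shell equivalent of two of those waits, assuming kubectl points at the same cluster and using namespaces from the container table further below (timeouts are illustrative):

	    kubectl -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx \
	      --for=condition=Ready --timeout=2m
	    kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	      --for=condition=Ready --timeout=2m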
	I1019 12:16:42.177237  295274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1019 12:16:43.001174  295274 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 12:16:43.001279  295274 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
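	The apply fails because a YAML document in ig-crd.yaml is missing its apiVersion and kind headers, which every Kubernetes manifest must carry; the retry at 12:16:42 hits the same file and fails the same way, so the inspektor-gadget addon never finishes enabling. The error message itself names the escape hatch, sketched here with the exact paths from the log (skipping validation only hides the malformed document, it does not fix it):

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	      -f /etc/kubernetes/addons/ig-crd.yaml \
	      -f /etc/kubernetes/addons/ig-deployment.yaml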
	I1019 12:16:43.005385  295274 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, cloud-spanner, registry-creds, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1019 12:16:43.008345  295274 addons.go:514] duration metric: took 2m0.958670498s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns cloud-spanner registry-creds storage-provisioner nvidia-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1019 12:16:43.008415  295274 start.go:246] waiting for cluster config update ...
	I1019 12:16:43.008444  295274 start.go:255] writing updated cluster config ...
	I1019 12:16:43.008777  295274 ssh_runner.go:195] Run: rm -f paused
	I1019 12:16:43.013485  295274 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:16:43.017923  295274 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pmnfn" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.024726  295274 pod_ready.go:94] pod "coredns-66bc5c9577-pmnfn" is "Ready"
	I1019 12:16:43.024757  295274 pod_ready.go:86] duration metric: took 6.756337ms for pod "coredns-66bc5c9577-pmnfn" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.027523  295274 pod_ready.go:83] waiting for pod "etcd-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.032691  295274 pod_ready.go:94] pod "etcd-addons-694780" is "Ready"
	I1019 12:16:43.032719  295274 pod_ready.go:86] duration metric: took 5.167491ms for pod "etcd-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.035194  295274 pod_ready.go:83] waiting for pod "kube-apiserver-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.040128  295274 pod_ready.go:94] pod "kube-apiserver-addons-694780" is "Ready"
	I1019 12:16:43.040159  295274 pod_ready.go:86] duration metric: took 4.938679ms for pod "kube-apiserver-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.042836  295274 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.417081  295274 pod_ready.go:94] pod "kube-controller-manager-addons-694780" is "Ready"
	I1019 12:16:43.417114  295274 pod_ready.go:86] duration metric: took 374.247577ms for pod "kube-controller-manager-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:43.619317  295274 pod_ready.go:83] waiting for pod "kube-proxy-g2s4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:44.017652  295274 pod_ready.go:94] pod "kube-proxy-g2s4z" is "Ready"
	I1019 12:16:44.017752  295274 pod_ready.go:86] duration metric: took 398.402857ms for pod "kube-proxy-g2s4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:44.218532  295274 pod_ready.go:83] waiting for pod "kube-scheduler-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:44.617541  295274 pod_ready.go:94] pod "kube-scheduler-addons-694780" is "Ready"
	I1019 12:16:44.617571  295274 pod_ready.go:86] duration metric: took 399.002717ms for pod "kube-scheduler-addons-694780" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:16:44.617584  295274 pod_ready.go:40] duration metric: took 1.604062784s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:16:44.689461  295274 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 12:16:44.692767  295274 out.go:179] * Done! kubectl is now configured to use "addons-694780" cluster and "default" namespace by default
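	The skew note above is benign: a 1.33 kubectl against a 1.34 API server is within the one-minor-version window kubectl supports. To check the pairing by hand:

	    kubectl version   # prints client and server versions; a minor skew of 1 is supported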
	
	
	==> CRI-O <==
	Oct 19 12:16:32 addons-694780 crio[831]: time="2025-10-19T12:16:32.891333999Z" level=info msg="Created container babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441: kube-system/csi-hostpathplugin-qx76c/csi-snapshotter" id=314b7e47-085f-4e41-b26b-7a5699980ca5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:16:32 addons-694780 crio[831]: time="2025-10-19T12:16:32.893829731Z" level=info msg="Starting container: babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441" id=5d11f493-a023-491b-bdc2-4e722ce1ea6e name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:16:32 addons-694780 crio[831]: time="2025-10-19T12:16:32.897606556Z" level=info msg="Started container" PID=4958 containerID=babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441 description=kube-system/csi-hostpathplugin-qx76c/csi-snapshotter id=5d11f493-a023-491b-bdc2-4e722ce1ea6e name=/runtime.v1.RuntimeService/StartContainer sandboxID=d665ebdf82843c4696fc68c4ba9330b265f902ec4188c0415f557e3b278006fd
	Oct 19 12:16:46 addons-694780 crio[831]: time="2025-10-19T12:16:46.132032106Z" level=info msg="Running pod sandbox: default/busybox/POD" id=73add0fd-62e7-4a5a-9849-510cbdc2b292 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:16:46 addons-694780 crio[831]: time="2025-10-19T12:16:46.132121362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:16:46 addons-694780 crio[831]: time="2025-10-19T12:16:46.139527424Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8aa3fa57da3dba0b6376856ede0a23c0a15271062598d44125b600e7b3426ed8 UID:ad68bc25-4243-4208-ae68-a37db2558acc NetNS:/var/run/netns/d2d5d149-baad-40fc-9d7f-771fdeb66dbb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400242f080}] Aliases:map[]}"
	Oct 19 12:16:46 addons-694780 crio[831]: time="2025-10-19T12:16:46.139568016Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 12:16:46 addons-694780 crio[831]: time="2025-10-19T12:16:46.152018892Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8aa3fa57da3dba0b6376856ede0a23c0a15271062598d44125b600e7b3426ed8 UID:ad68bc25-4243-4208-ae68-a37db2558acc NetNS:/var/run/netns/d2d5d149-baad-40fc-9d7f-771fdeb66dbb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400242f080}] Aliases:map[]}"
	Oct 19 12:16:46 addons-694780 crio[831]: time="2025-10-19T12:16:46.152222259Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 12:16:46 addons-694780 crio[831]: time="2025-10-19T12:16:46.15558572Z" level=info msg="Ran pod sandbox 8aa3fa57da3dba0b6376856ede0a23c0a15271062598d44125b600e7b3426ed8 with infra container: default/busybox/POD" id=73add0fd-62e7-4a5a-9849-510cbdc2b292 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:16:46 addons-694780 crio[831]: time="2025-10-19T12:16:46.15675468Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4b77ce1c-f80a-4840-b04e-5aeebae0d5ea name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:16:46 addons-694780 crio[831]: time="2025-10-19T12:16:46.156907864Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4b77ce1c-f80a-4840-b04e-5aeebae0d5ea name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:16:46 addons-694780 crio[831]: time="2025-10-19T12:16:46.156957825Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4b77ce1c-f80a-4840-b04e-5aeebae0d5ea name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:16:46 addons-694780 crio[831]: time="2025-10-19T12:16:46.160512682Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5faa7076-63e2-48bf-b4a4-ff180fd3f378 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:16:46 addons-694780 crio[831]: time="2025-10-19T12:16:46.162975765Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 12:16:48 addons-694780 crio[831]: time="2025-10-19T12:16:48.260480298Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5faa7076-63e2-48bf-b4a4-ff180fd3f378 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:16:48 addons-694780 crio[831]: time="2025-10-19T12:16:48.261360745Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f873c570-d826-4c4d-8605-359b8f54771b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:16:48 addons-694780 crio[831]: time="2025-10-19T12:16:48.264552356Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=028c1995-5f7d-4269-975b-907d744abdb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:16:48 addons-694780 crio[831]: time="2025-10-19T12:16:48.271936625Z" level=info msg="Creating container: default/busybox/busybox" id=62cf0f5b-c169-4cdb-8f41-5bfb5138bb1e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:16:48 addons-694780 crio[831]: time="2025-10-19T12:16:48.272785712Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:16:48 addons-694780 crio[831]: time="2025-10-19T12:16:48.279277849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:16:48 addons-694780 crio[831]: time="2025-10-19T12:16:48.279767128Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:16:48 addons-694780 crio[831]: time="2025-10-19T12:16:48.295523232Z" level=info msg="Created container 88e7946bdf82d7adff0e393dc33fcc8c8651b816e924de5b6a46241b25d6afff: default/busybox/busybox" id=62cf0f5b-c169-4cdb-8f41-5bfb5138bb1e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:16:48 addons-694780 crio[831]: time="2025-10-19T12:16:48.296461855Z" level=info msg="Starting container: 88e7946bdf82d7adff0e393dc33fcc8c8651b816e924de5b6a46241b25d6afff" id=1a10c369-7821-43e4-b1dd-172fe3aa16fe name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:16:48 addons-694780 crio[831]: time="2025-10-19T12:16:48.299780696Z" level=info msg="Started container" PID=5094 containerID=88e7946bdf82d7adff0e393dc33fcc8c8651b816e924de5b6a46241b25d6afff description=default/busybox/busybox id=1a10c369-7821-43e4-b1dd-172fe3aa16fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=8aa3fa57da3dba0b6376856ede0a23c0a15271062598d44125b600e7b3426ed8
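	The container-status table below is the CRI-level view of the node; with the CRI-O runtime from this log, the same state can be inspected by hand with crictl (the container ID is the truncated one CRI-O printed above):

	    sudo crictl ps -a                  # all containers, running and exited
	    sudo crictl images | grep busybox  # confirm the gcr.io/k8s-minikube/busybox pull
	    sudo crictl logs 88e7946bdf82d     # logs for the busybox container started above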
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	88e7946bdf82d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   8aa3fa57da3db       busybox                                     default
	babbcf90f6ac9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          24 seconds ago       Running             csi-snapshotter                          0                   d665ebdf82843       csi-hostpathplugin-qx76c                    kube-system
	9f6526183c819       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 25 seconds ago       Running             gcp-auth                                 0                   19ad011d1176a       gcp-auth-78565c9fb4-cdmqg                   gcp-auth
	4af26279aa6f2       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          29 seconds ago       Running             csi-provisioner                          0                   d665ebdf82843       csi-hostpathplugin-qx76c                    kube-system
	dbc7b2d7b48c2       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            30 seconds ago       Running             liveness-probe                           0                   d665ebdf82843       csi-hostpathplugin-qx76c                    kube-system
	53d99b9c1fa5a       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           31 seconds ago       Running             hostpath                                 0                   d665ebdf82843       csi-hostpathplugin-qx76c                    kube-system
	1159fff2343a5       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                33 seconds ago       Running             node-driver-registrar                    0                   d665ebdf82843       csi-hostpathplugin-qx76c                    kube-system
	dd7b562cc0cf0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            34 seconds ago       Running             gadget                                   0                   6a7cf144efa51       gadget-qqrhf                                gadget
	c93e1337cec3a       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             38 seconds ago       Running             controller                               0                   fa5aa62abc378       ingress-nginx-controller-675c5ddd98-5qr44   ingress-nginx
	976c559427e02       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              44 seconds ago       Running             registry-proxy                           0                   54115c16633c7       registry-proxy-4r8wk                        kube-system
	20da76bbf7724       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   48 seconds ago       Exited              patch                                    0                   c0793ad0a80bc       ingress-nginx-admission-patch-s49m4         ingress-nginx
	4e8fe40f4a508       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   48 seconds ago       Running             csi-external-health-monitor-controller   0                   d665ebdf82843       csi-hostpathplugin-qx76c                    kube-system
	493d4e6052927       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             49 seconds ago       Exited              patch                                    2                   452b87d32c116       gcp-auth-certs-patch-95m57                  gcp-auth
	3c758f6c5602f       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        50 seconds ago       Running             metrics-server                           0                   e1476dbecedaf       metrics-server-85b7d694d7-qjfpt             kube-system
	3c320bc0124b5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   52 seconds ago       Exited              create                                   0                   1eebe84fed1ea       gcp-auth-certs-create-kwq5c                 gcp-auth
	c93fad6f2f681       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     52 seconds ago       Running             nvidia-device-plugin-ctr                 0                   2262fecea932f       nvidia-device-plugin-daemonset-rl6ct        kube-system
	d66a0ce31c46f       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   43d6f94bd7955       registry-6b586f9694-cz995                   kube-system
	019ec1d7cee73       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   dbc2f931b1b2b       kube-ingress-dns-minikube                   kube-system
	82514a9622aa2       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   821c63b750b1b       local-path-provisioner-648f6765c9-n4zsd     local-path-storage
	ad7a2781a873f       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   0f0fb7349b5ee       snapshot-controller-7d9fbc56b8-slbnx        kube-system
	fc02e62488e86       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   03be8b8b8c1a4       yakd-dashboard-5ff678cb9-wwfqw              yakd-dashboard
	80882ef14df04       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   f345e92731737       csi-hostpath-attacher-0                     kube-system
	795c9019de222       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   fb2559abc3d3c       snapshot-controller-7d9fbc56b8-tpk9s        kube-system
	714974313acc5       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   f6cb2f3c02890       cloud-spanner-emulator-86bd5cbb97-6nxrn     default
	da425ec8726de       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   b0dcfac982b64       ingress-nginx-admission-create-tcxc5        ingress-nginx
	1a89d3feb3cc1       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   b1311b5a7abcc       csi-hostpath-resizer-0                      kube-system
	c1af9139ef29a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   fcba510a1debe       storage-provisioner                         kube-system
	c10333b42245b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   9c546dccc3b1c       coredns-66bc5c9577-pmnfn                    kube-system
	0e8ae7e9978df       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   0662f50b0bed2       kube-proxy-g2s4z                            kube-system
	1fbbdaf72898f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   5abad85dc189a       kindnet-hbjtx                               kube-system
	20700ce554fde       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   9192c2ce84035       kube-scheduler-addons-694780                kube-system
	ebc110500cd3d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   3e50fb8e31c60       etcd-addons-694780                          kube-system
	4b12dbb529374       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   0b2fd3ce2345b       kube-controller-manager-addons-694780       kube-system
	974f057716664       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   c3234c3fb92e1       kube-apiserver-addons-694780                kube-system
	
	
	==> coredns [c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d] <==
	[INFO] 10.244.0.12:44018 - 64708 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000150484s
	[INFO] 10.244.0.12:44018 - 27522 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002414705s
	[INFO] 10.244.0.12:44018 - 26937 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002469565s
	[INFO] 10.244.0.12:44018 - 24715 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000225743s
	[INFO] 10.244.0.12:44018 - 38521 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00029849s
	[INFO] 10.244.0.12:40245 - 19278 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000176355s
	[INFO] 10.244.0.12:40245 - 19081 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000081913s
	[INFO] 10.244.0.12:39800 - 13430 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091349s
	[INFO] 10.244.0.12:39800 - 13217 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076268s
	[INFO] 10.244.0.12:50239 - 42997 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084038s
	[INFO] 10.244.0.12:50239 - 42536 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098077s
	[INFO] 10.244.0.12:39746 - 2216 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001504309s
	[INFO] 10.244.0.12:39746 - 2405 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001082682s
	[INFO] 10.244.0.12:37177 - 7739 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000099013s
	[INFO] 10.244.0.12:37177 - 7551 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000202826s
	[INFO] 10.244.0.21:43894 - 59324 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000187925s
	[INFO] 10.244.0.21:56054 - 40855 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000414898s
	[INFO] 10.244.0.21:60374 - 45139 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159124s
	[INFO] 10.244.0.21:41277 - 20955 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136199s
	[INFO] 10.244.0.21:54940 - 12643 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013487s
	[INFO] 10.244.0.21:33728 - 52877 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117318s
	[INFO] 10.244.0.21:39460 - 38870 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003109295s
	[INFO] 10.244.0.21:50334 - 52949 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003441271s
	[INFO] 10.244.0.21:40438 - 53530 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000597465s
	[INFO] 10.244.0.21:52860 - 34637 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002178419s
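	Note: the NXDOMAIN/NOERROR pairs above are normal search-path expansion, not lookup failures. With ndots:5 the resolver tries every search suffix before the absolute name, and only the final queries answer NOERROR. A pod resolv.conf consistent with these suffixes would look like the sketch below, reconstructed from the query suffixes in the log; the nameserver IP is the usual minikube cluster-DNS address and is an assumption:
	
	  $ kubectl exec -n kube-system registry-6b586f9694-cz995 -- cat /etc/resolv.conf
	  search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	  nameserver 10.96.0.10   # assumed cluster DNS service IP
	  options ndots:5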
	
	
	==> describe nodes <==
	Name:               addons-694780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-694780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=addons-694780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_14_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-694780
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-694780"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:14:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-694780
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:16:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:16:39 +0000   Sun, 19 Oct 2025 12:14:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:16:39 +0000   Sun, 19 Oct 2025 12:14:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:16:39 +0000   Sun, 19 Oct 2025 12:14:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:16:39 +0000   Sun, 19 Oct 2025 12:15:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-694780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                899ba98e-c2fa-4cbf-97dc-320d6f52a440
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-86bd5cbb97-6nxrn      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  gadget                      gadget-qqrhf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  gcp-auth                    gcp-auth-78565c9fb4-cdmqg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-5qr44    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m9s
	  kube-system                 coredns-66bc5c9577-pmnfn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m15s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 csi-hostpathplugin-qx76c                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 etcd-addons-694780                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m20s
	  kube-system                 kindnet-hbjtx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-addons-694780                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-addons-694780        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-g2s4z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-addons-694780                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 metrics-server-85b7d694d7-qjfpt              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m10s
	  kube-system                 nvidia-device-plugin-daemonset-rl6ct         0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 registry-6b586f9694-cz995                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 registry-creds-764b6fb674-c7zhl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 registry-proxy-4r8wk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 snapshot-controller-7d9fbc56b8-slbnx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 snapshot-controller-7d9fbc56b8-tpk9s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  local-path-storage          local-path-provisioner-648f6765c9-n4zsd      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-wwfqw               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m14s  kube-proxy       
	  Normal   Starting                 2m28s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m28s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s  kubelet          Node addons-694780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s  kubelet          Node addons-694780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s  kubelet          Node addons-694780 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m21s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m21s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m21s  kubelet          Node addons-694780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m21s  kubelet          Node addons-694780 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m21s  kubelet          Node addons-694780 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m17s  node-controller  Node addons-694780 event: Registered Node addons-694780 in Controller
	  Normal   NodeReady                95s    kubelet          Node addons-694780 status is now: NodeReady
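	Note: the duplicated event pairs (2m28s and 2m21s) show the kubelet restarting once during bootstrap, and NodeReady at 95s matches the age of the csi-hostpathplugin and registry-proxy pods above. A quick hedged check of the Ready condition:
	
	  $ kubectl get node addons-694780 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	  True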
	
	
	==> dmesg <==
	[Oct19 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015448] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.491491] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034667] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806219] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.239480] kauditd_printk_skb: 36 callbacks suppressed
	[Oct19 11:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct19 11:24] hrtimer: interrupt took 38365015 ns
	[Oct19 12:12] kauditd_printk_skb: 8 callbacks suppressed
	[Oct19 12:14] overlayfs: idmapped layers are currently not supported
	[  +0.068862] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85] <==
	{"level":"warn","ts":"2025-10-19T12:14:32.590818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.621116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.626972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.649200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.666009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.680035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.700249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.718435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.736017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.749280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.769367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.782600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.798647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.824188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.845712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.874218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.896560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:32.914420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:33.047574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:49.012469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:49.022065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:15:10.889081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:15:10.912301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:15:10.935975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:15:10.951892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34194","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [9f6526183c819720f10988ec9064a0726b712bb6b5f9110bc8603baa3f2fd7ed] <==
	2025/10/19 12:16:31 GCP Auth Webhook started!
	2025/10/19 12:16:45 Ready to marshal response ...
	2025/10/19 12:16:45 Ready to write response ...
	2025/10/19 12:16:45 Ready to marshal response ...
	2025/10/19 12:16:45 Ready to write response ...
	2025/10/19 12:16:45 Ready to marshal response ...
	2025/10/19 12:16:45 Ready to write response ...
	
	
	==> kernel <==
	 12:16:57 up  1:59,  0 user,  load average: 2.33, 3.15, 3.51
	Linux addons-694780 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9] <==
	E1019 12:15:12.420337       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 12:15:12.420408       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1019 12:15:14.020876       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:15:14.020909       1 metrics.go:72] Registering metrics
	I1019 12:15:14.020978       1 controller.go:711] "Syncing nftables rules"
	I1019 12:15:22.425013       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:15:22.425051       1 main.go:301] handling current node
	I1019 12:15:32.421987       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:15:32.422021       1 main.go:301] handling current node
	I1019 12:15:42.419692       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:15:42.419728       1 main.go:301] handling current node
	I1019 12:15:52.419455       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:15:52.419483       1 main.go:301] handling current node
	I1019 12:16:02.427292       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:16:02.427322       1 main.go:301] handling current node
	I1019 12:16:12.419241       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:16:12.419279       1 main.go:301] handling current node
	I1019 12:16:22.420921       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:16:22.420948       1 main.go:301] handling current node
	I1019 12:16:32.419661       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:16:32.419727       1 main.go:301] handling current node
	I1019 12:16:42.419381       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:16:42.419415       1 main.go:301] handling current node
	I1019 12:16:52.419609       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:16:52.419643       1 main.go:301] handling current node
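	Note: the two initial "Failed to watch ... i/o timeout" errors against 10.96.0.1:443 occur before the cache-sync message, and every ten-second node-handling loop afterwards succeeds, so this was a transient startup race while the apiserver service VIP was still being programmed, not an ongoing fault. A hedged spot check (the app=kindnet label is assumed from the upstream manifests):
	
	  $ kubectl -n kube-system logs -l app=kindnet --tail=5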
	
	
	==> kube-apiserver [974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a] <==
	E1019 12:16:08.579445       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1019 12:16:08.579895       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.12.41:443: connect: connection refused" logger="UnhandledError"
	E1019 12:16:08.582126       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.12.41:443: connect: connection refused" logger="UnhandledError"
	E1019 12:16:08.589115       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.12.41:443: connect: connection refused" logger="UnhandledError"
	E1019 12:16:08.610503       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.12.41:443: connect: connection refused" logger="UnhandledError"
	W1019 12:16:09.580235       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 12:16:09.580292       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1019 12:16:09.580310       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1019 12:16:09.580244       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 12:16:09.580385       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1019 12:16:09.581512       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1019 12:16:13.660491       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.12.41:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W1019 12:16:13.660989       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 12:16:13.661034       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1019 12:16:13.709530       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1019 12:16:55.335569       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34552: use of closed network connection
	
	
	==> kube-controller-manager [4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a] <==
	I1019 12:14:40.904737       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 12:14:40.914453       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:14:40.916080       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 12:14:40.919132       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:14:40.919452       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 12:14:40.919904       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 12:14:40.919932       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 12:14:40.920244       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 12:14:40.920362       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 12:14:40.920450       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 12:14:40.920472       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 12:14:40.920507       1 shared_informer.go:356] "Caches are synced" controller="job"
	E1019 12:14:47.487546       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1019 12:15:10.880374       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 12:15:10.880533       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1019 12:15:10.880577       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1019 12:15:10.923873       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1019 12:15:10.927928       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1019 12:15:10.981320       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:15:11.029115       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:15:25.874683       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1019 12:15:40.990831       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 12:15:41.038488       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1019 12:16:10.996426       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 12:16:11.046584       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350] <==
	I1019 12:14:42.631676       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:14:42.908366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:14:43.010722       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:14:43.010756       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 12:14:43.010848       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:14:43.044835       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:14:43.044885       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:14:43.056942       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:14:43.058204       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:14:43.058231       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:14:43.070716       1 config.go:200] "Starting service config controller"
	I1019 12:14:43.070743       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:14:43.070762       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:14:43.070767       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:14:43.070794       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:14:43.070800       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:14:43.072800       1 config.go:309] "Starting node config controller"
	I1019 12:14:43.072825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:14:43.072836       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:14:43.171439       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:14:43.171481       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 12:14:43.171522       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
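	Note: the only warning here is kube-proxy reporting that nodePortAddresses is unset, so NodePort connections are accepted on every local IP; the message carries its own remedy. As a sketch, the suggested flag on the kube-proxy command line:
	
	  kube-proxy --nodeport-addresses primary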
	
	
	==> kube-scheduler [20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d] <==
	I1019 12:14:34.440163       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:14:34.442276       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:14:34.442381       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:14:34.442644       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:14:34.442754       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 12:14:34.453243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:14:34.453438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:14:34.453526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 12:14:34.453648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:14:34.453897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:14:34.453995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:14:34.454103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:14:34.454195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:14:34.454305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:14:34.454441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 12:14:34.454792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 12:14:34.454901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:14:34.455058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 12:14:34.455284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:14:34.455371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:14:34.455537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:14:34.457179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:14:34.457360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:14:34.457475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1019 12:14:35.643009       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:16:09 addons-694780 kubelet[1297]: I1019 12:16:09.694440    1297 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x59pj\" (UniqueName: \"kubernetes.io/projected/1ceb17a6-08a4-400a-abf0-baa77c4478a0-kube-api-access-x59pj\") pod \"1ceb17a6-08a4-400a-abf0-baa77c4478a0\" (UID: \"1ceb17a6-08a4-400a-abf0-baa77c4478a0\") "
	Oct 19 12:16:09 addons-694780 kubelet[1297]: I1019 12:16:09.698348    1297 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ceb17a6-08a4-400a-abf0-baa77c4478a0-kube-api-access-x59pj" (OuterVolumeSpecName: "kube-api-access-x59pj") pod "1ceb17a6-08a4-400a-abf0-baa77c4478a0" (UID: "1ceb17a6-08a4-400a-abf0-baa77c4478a0"). InnerVolumeSpecName "kube-api-access-x59pj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 19 12:16:09 addons-694780 kubelet[1297]: I1019 12:16:09.795416    1297 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x59pj\" (UniqueName: \"kubernetes.io/projected/1ceb17a6-08a4-400a-abf0-baa77c4478a0-kube-api-access-x59pj\") on node \"addons-694780\" DevicePath \"\""
	Oct 19 12:16:10 addons-694780 kubelet[1297]: I1019 12:16:10.515600    1297 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="452b87d32c1162fd8f1006f4c2f8103762520a8b0ace7d5c0a0ec8632f49324b"
	Oct 19 12:16:10 addons-694780 kubelet[1297]: I1019 12:16:10.701105    1297 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhcmr\" (UniqueName: \"kubernetes.io/projected/e134a4ef-baed-44f1-84a7-4ed6c17765dc-kube-api-access-dhcmr\") pod \"e134a4ef-baed-44f1-84a7-4ed6c17765dc\" (UID: \"e134a4ef-baed-44f1-84a7-4ed6c17765dc\") "
	Oct 19 12:16:10 addons-694780 kubelet[1297]: I1019 12:16:10.706010    1297 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e134a4ef-baed-44f1-84a7-4ed6c17765dc-kube-api-access-dhcmr" (OuterVolumeSpecName: "kube-api-access-dhcmr") pod "e134a4ef-baed-44f1-84a7-4ed6c17765dc" (UID: "e134a4ef-baed-44f1-84a7-4ed6c17765dc"). InnerVolumeSpecName "kube-api-access-dhcmr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 19 12:16:10 addons-694780 kubelet[1297]: I1019 12:16:10.801521    1297 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dhcmr\" (UniqueName: \"kubernetes.io/projected/e134a4ef-baed-44f1-84a7-4ed6c17765dc-kube-api-access-dhcmr\") on node \"addons-694780\" DevicePath \"\""
	Oct 19 12:16:11 addons-694780 kubelet[1297]: I1019 12:16:11.525670    1297 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0793ad0a80bce8da6e90fe3eb95b63b3eec8fcf64c0c3f86d61175d21b6105b"
	Oct 19 12:16:13 addons-694780 kubelet[1297]: I1019 12:16:13.539417    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-4r8wk" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:16:13 addons-694780 kubelet[1297]: I1019 12:16:13.564761    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-4r8wk" podStartSLOduration=2.670715678 podStartE2EDuration="51.56474055s" podCreationTimestamp="2025-10-19 12:15:22 +0000 UTC" firstStartedPulling="2025-10-19 12:15:24.178242554 +0000 UTC m=+47.534037211" lastFinishedPulling="2025-10-19 12:16:13.072267426 +0000 UTC m=+96.428062083" observedRunningTime="2025-10-19 12:16:13.560079257 +0000 UTC m=+96.915873930" watchObservedRunningTime="2025-10-19 12:16:13.56474055 +0000 UTC m=+96.920535207"
	Oct 19 12:16:14 addons-694780 kubelet[1297]: I1019 12:16:14.546888    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-4r8wk" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:16:23 addons-694780 kubelet[1297]: I1019 12:16:23.622690    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-5qr44" podStartSLOduration=48.679463975 podStartE2EDuration="1m35.622672684s" podCreationTimestamp="2025-10-19 12:14:48 +0000 UTC" firstStartedPulling="2025-10-19 12:15:32.245814542 +0000 UTC m=+55.601609199" lastFinishedPulling="2025-10-19 12:16:19.189023161 +0000 UTC m=+102.544817908" observedRunningTime="2025-10-19 12:16:19.586161111 +0000 UTC m=+102.941955776" watchObservedRunningTime="2025-10-19 12:16:23.622672684 +0000 UTC m=+106.978467349"
	Oct 19 12:16:26 addons-694780 kubelet[1297]: I1019 12:16:26.657468    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-qqrhf" podStartSLOduration=68.385254697 podStartE2EDuration="1m39.657441511s" podCreationTimestamp="2025-10-19 12:14:47 +0000 UTC" firstStartedPulling="2025-10-19 12:15:51.526803859 +0000 UTC m=+74.882598516" lastFinishedPulling="2025-10-19 12:16:22.798990673 +0000 UTC m=+106.154785330" observedRunningTime="2025-10-19 12:16:23.62849218 +0000 UTC m=+106.984286845" watchObservedRunningTime="2025-10-19 12:16:26.657441511 +0000 UTC m=+110.013236168"
	Oct 19 12:16:26 addons-694780 kubelet[1297]: I1019 12:16:26.959799    1297 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 19 12:16:26 addons-694780 kubelet[1297]: I1019 12:16:26.960408    1297 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 19 12:16:26 addons-694780 kubelet[1297]: E1019 12:16:26.980867    1297 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 19 12:16:26 addons-694780 kubelet[1297]: E1019 12:16:26.980962    1297 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13adddb6-d4bf-4eff-8eef-f96cbd11e787-gcr-creds podName:13adddb6-d4bf-4eff-8eef-f96cbd11e787 nodeName:}" failed. No retries permitted until 2025-10-19 12:17:30.980943648 +0000 UTC m=+174.336738305 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/13adddb6-d4bf-4eff-8eef-f96cbd11e787-gcr-creds") pod "registry-creds-764b6fb674-c7zhl" (UID: "13adddb6-d4bf-4eff-8eef-f96cbd11e787") : secret "registry-creds-gcr" not found
	Oct 19 12:16:27 addons-694780 kubelet[1297]: W1019 12:16:27.261088    1297 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6/crio-19ad011d1176afe69ba3c9037be594d6ec6a50e39674b3e6a9fc4fb3d1d1501d WatchSource:0}: Error finding container 19ad011d1176afe69ba3c9037be594d6ec6a50e39674b3e6a9fc4fb3d1d1501d: Status 404 returned error can't find the container with id 19ad011d1176afe69ba3c9037be594d6ec6a50e39674b3e6a9fc4fb3d1d1501d
	Oct 19 12:16:33 addons-694780 kubelet[1297]: I1019 12:16:33.682817    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-cdmqg" podStartSLOduration=98.611989815 podStartE2EDuration="1m42.682796068s" podCreationTimestamp="2025-10-19 12:14:51 +0000 UTC" firstStartedPulling="2025-10-19 12:16:27.267467956 +0000 UTC m=+110.623262613" lastFinishedPulling="2025-10-19 12:16:31.338274201 +0000 UTC m=+114.694068866" observedRunningTime="2025-10-19 12:16:31.677363439 +0000 UTC m=+115.033158120" watchObservedRunningTime="2025-10-19 12:16:33.682796068 +0000 UTC m=+117.038590725"
	Oct 19 12:16:37 addons-694780 kubelet[1297]: I1019 12:16:37.043407    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-qx76c" podStartSLOduration=6.236984978 podStartE2EDuration="1m15.043385372s" podCreationTimestamp="2025-10-19 12:15:22 +0000 UTC" firstStartedPulling="2025-10-19 12:15:24.039605199 +0000 UTC m=+47.395399864" lastFinishedPulling="2025-10-19 12:16:32.846005601 +0000 UTC m=+116.201800258" observedRunningTime="2025-10-19 12:16:33.683888449 +0000 UTC m=+117.039683114" watchObservedRunningTime="2025-10-19 12:16:37.043385372 +0000 UTC m=+120.399180028"
	Oct 19 12:16:38 addons-694780 kubelet[1297]: I1019 12:16:38.769919    1297 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f3be0a6-6daa-4fdf-a60f-b0cf71ca5112" path="/var/lib/kubelet/pods/2f3be0a6-6daa-4fdf-a60f-b0cf71ca5112/volumes"
	Oct 19 12:16:40 addons-694780 kubelet[1297]: I1019 12:16:40.770092    1297 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ceb17a6-08a4-400a-abf0-baa77c4478a0" path="/var/lib/kubelet/pods/1ceb17a6-08a4-400a-abf0-baa77c4478a0/volumes"
	Oct 19 12:16:45 addons-694780 kubelet[1297]: I1019 12:16:45.941197    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ad68bc25-4243-4208-ae68-a37db2558acc-gcp-creds\") pod \"busybox\" (UID: \"ad68bc25-4243-4208-ae68-a37db2558acc\") " pod="default/busybox"
	Oct 19 12:16:45 addons-694780 kubelet[1297]: I1019 12:16:45.941764    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px2m6\" (UniqueName: \"kubernetes.io/projected/ad68bc25-4243-4208-ae68-a37db2558acc-kube-api-access-px2m6\") pod \"busybox\" (UID: \"ad68bc25-4243-4208-ae68-a37db2558acc\") " pod="default/busybox"
	Oct 19 12:16:46 addons-694780 kubelet[1297]: W1019 12:16:46.154366    1297 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1204b177504834de2bad5ed03ffce4ec658a2a7b627e21eea9f07b8d50fe34f6/crio-8aa3fa57da3dba0b6376856ede0a23c0a15271062598d44125b600e7b3426ed8 WatchSource:0}: Error finding container 8aa3fa57da3dba0b6376856ede0a23c0a15271062598d44125b600e7b3426ed8: Status 404 returned error can't find the container with id 8aa3fa57da3dba0b6376856ede0a23c0a15271062598d44125b600e7b3426ed8
	
	
	==> storage-provisioner [c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467] <==
	W1019 12:16:32.568432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:34.572208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:34.576577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:36.579606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:36.586447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:38.589174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:38.593994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:40.596743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:40.601530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:42.605135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:42.610490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:44.614289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:44.625984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:46.628740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:46.640620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:48.644050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:48.649122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:50.651948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:50.658889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:52.662108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:52.666777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:54.670145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:54.676623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:56.684458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:16:56.690714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
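Note on the repeated storage-provisioner warnings in the dump above: they appear to come from the provisioner's Endpoints-based leader-election lock. The v1 Endpoints API still works in this cluster, so the warnings are informational only; the suggested replacement API can be inspected directly (assuming the same kubectl context as this run):

	kubectl --context addons-694780 -n kube-system get endpointslices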
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-694780 -n addons-694780
helpers_test.go:269: (dbg) Run:  kubectl --context addons-694780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-tcxc5 ingress-nginx-admission-patch-s49m4 registry-creds-764b6fb674-c7zhl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-694780 describe pod ingress-nginx-admission-create-tcxc5 ingress-nginx-admission-patch-s49m4 registry-creds-764b6fb674-c7zhl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-694780 describe pod ingress-nginx-admission-create-tcxc5 ingress-nginx-admission-patch-s49m4 registry-creds-764b6fb674-c7zhl: exit status 1 (87.943631ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tcxc5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-s49m4" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-c7zhl" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-694780 describe pod ingress-nginx-admission-create-tcxc5 ingress-nginx-admission-patch-s49m4 registry-creds-764b6fb674-c7zhl: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable headlamp --alsologtostderr -v=1: exit status 11 (258.171703ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:16:58.803901  301965 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:16:58.804772  301965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:16:58.804787  301965 out.go:374] Setting ErrFile to fd 2...
	I1019 12:16:58.804792  301965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:16:58.805119  301965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:16:58.805454  301965 mustload.go:65] Loading cluster: addons-694780
	I1019 12:16:58.805928  301965 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:16:58.805950  301965 addons.go:606] checking whether the cluster is paused
	I1019 12:16:58.806090  301965 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:16:58.806131  301965 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:16:58.806618  301965 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:16:58.824113  301965 ssh_runner.go:195] Run: systemctl --version
	I1019 12:16:58.824174  301965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:16:58.840877  301965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:16:58.944108  301965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:16:58.944204  301965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:16:58.977437  301965 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:16:58.977465  301965 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:16:58.977470  301965 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:16:58.977474  301965 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:16:58.977478  301965 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:16:58.977481  301965 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:16:58.977485  301965 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:16:58.977489  301965 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:16:58.977493  301965 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:16:58.977500  301965 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:16:58.977504  301965 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:16:58.977507  301965 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:16:58.977510  301965 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:16:58.977514  301965 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:16:58.977517  301965 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:16:58.977524  301965 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:16:58.977531  301965 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:16:58.977536  301965 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:16:58.977539  301965 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:16:58.977542  301965 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:16:58.977546  301965 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:16:58.977553  301965 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:16:58.977556  301965 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:16:58.977559  301965 cri.go:89] found id: ""
	I1019 12:16:58.977612  301965 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:16:58.992513  301965 out.go:203] 
	W1019 12:16:58.995490  301965 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:16:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:16:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:16:58.995517  301965 out.go:285] * 
	* 
	W1019 12:16:59.002070  301965 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:16:59.005074  301965 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.28s)
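Note that every MK_ADDON_DISABLE_PAUSED failure in this run (here and in the CloudSpanner, LocalPath, NvidiaDevicePlugin and Yakd sections below) shows the same mechanism: the addon-disable path checks whether the cluster is paused by first listing kube-system containers with crictl (which succeeds, hence the long run of "found id:" lines) and then running "sudo runc list -f json", which exits 1 because /run/runc does not exist on this CRI-O node. A minimal sketch of reproducing the check by hand, assuming the addons-694780 profile is still running:

	# succeeds: CRI-O containers are visible to crictl
	minikube -p addons-694780 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# exits 1 with "open /run/runc: no such file or directory" when CRI-O is not populating runc's default state directory
	minikube -p addons-694780 ssh -- sudo runc list -f json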

x
+
TestAddons/parallel/CloudSpanner (6.28s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-6nxrn" [6a8e15af-0079-41a1-bfbe-183fc376a903] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003486316s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (270.079324ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:18:12.802137  303892 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:18:12.802917  303892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:18:12.802956  303892 out.go:374] Setting ErrFile to fd 2...
	I1019 12:18:12.802978  303892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:18:12.803249  303892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:18:12.803558  303892 mustload.go:65] Loading cluster: addons-694780
	I1019 12:18:12.803958  303892 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:18:12.803999  303892 addons.go:606] checking whether the cluster is paused
	I1019 12:18:12.804126  303892 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:18:12.804169  303892 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:18:12.804639  303892 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:18:12.823466  303892 ssh_runner.go:195] Run: systemctl --version
	I1019 12:18:12.823528  303892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:18:12.844070  303892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:18:12.948707  303892 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:18:12.948841  303892 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:18:12.983855  303892 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:18:12.983884  303892 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:18:12.983890  303892 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:18:12.983893  303892 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:18:12.983896  303892 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:18:12.983900  303892 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:18:12.983903  303892 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:18:12.983906  303892 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:18:12.983909  303892 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:18:12.983947  303892 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:18:12.983952  303892 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:18:12.983955  303892 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:18:12.983958  303892 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:18:12.983961  303892 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:18:12.983964  303892 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:18:12.983974  303892 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:18:12.983985  303892 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:18:12.983991  303892 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:18:12.984010  303892 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:18:12.984016  303892 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:18:12.984022  303892 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:18:12.984029  303892 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:18:12.984033  303892 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:18:12.984035  303892 cri.go:89] found id: ""
	I1019 12:18:12.984104  303892 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:18:12.999569  303892 out.go:203] 
	W1019 12:18:13.004467  303892 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:18:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:18:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:18:13.004497  303892 out.go:285] * 
	* 
	W1019 12:18:13.011084  303892 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:18:13.014094  303892 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.28s)

x
+
TestAddons/parallel/LocalPath (9.44s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-694780 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-694780 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-694780 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [fd5fdb66-ef4c-42d8-b2a2-cf2e369aad08] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [fd5fdb66-ef4c-42d8-b2a2-cf2e369aad08] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [fd5fdb66-ef4c-42d8-b2a2-cf2e369aad08] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003355432s
addons_test.go:967: (dbg) Run:  kubectl --context addons-694780 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 ssh "cat /opt/local-path-provisioner/pvc-c2e9b24a-4b9e-48a1-a73a-ec392ca86059_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-694780 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-694780 delete pvc test-pvc
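The steps above exercise the local-path provisioner end to end (PVC bound, the test pod wrote file1, and the file was read back over ssh), so only the addon-disable step below fails. For reference, a minimal sketch of an equivalent PVC; the actual contents of testdata/storage-provisioner-rancher/pvc.yaml are not shown in this report, and the storage class name and size here are assumptions:

	kubectl --context addons-694780 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  storageClassName: local-path   # rancher local-path provisioner's default class (assumed)
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 64Mi              # size assumed for illustration
	EOF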
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (305.908347ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:17:54.739169  303655 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:17:54.740141  303655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:54.740165  303655 out.go:374] Setting ErrFile to fd 2...
	I1019 12:17:54.740172  303655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:17:54.740516  303655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:17:54.740894  303655 mustload.go:65] Loading cluster: addons-694780
	I1019 12:17:54.741336  303655 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:54.741362  303655 addons.go:606] checking whether the cluster is paused
	I1019 12:17:54.741574  303655 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:17:54.741609  303655 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:17:54.742191  303655 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:17:54.762237  303655 ssh_runner.go:195] Run: systemctl --version
	I1019 12:17:54.762303  303655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:17:54.782901  303655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:17:54.888561  303655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:17:54.888650  303655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:17:54.943382  303655 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:17:54.943411  303655 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:17:54.943416  303655 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:17:54.943420  303655 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:17:54.943423  303655 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:17:54.943426  303655 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:17:54.943429  303655 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:17:54.943432  303655 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:17:54.943435  303655 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:17:54.943441  303655 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:17:54.943444  303655 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:17:54.943448  303655 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:17:54.943451  303655 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:17:54.943454  303655 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:17:54.943458  303655 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:17:54.943463  303655 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:17:54.943475  303655 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:17:54.943479  303655 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:17:54.943482  303655 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:17:54.943485  303655 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:17:54.943490  303655 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:17:54.943493  303655 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:17:54.943496  303655 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:17:54.943499  303655 cri.go:89] found id: ""
	I1019 12:17:54.943555  303655 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:17:54.964434  303655 out.go:203] 
	W1019 12:17:54.967710  303655 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:17:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:17:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:17:54.967796  303655 out.go:285] * 
	* 
	W1019 12:17:54.974319  303655 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:17:54.978187  303655 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.44s)

x
+
TestAddons/parallel/NvidiaDevicePlugin (6.31s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-rl6ct" [1169929a-70c6-44e8-a514-f532fb25a448] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003260632s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (300.802681ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:18:06.509243  303818 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:18:06.510227  303818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:18:06.510277  303818 out.go:374] Setting ErrFile to fd 2...
	I1019 12:18:06.510300  303818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:18:06.510620  303818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:18:06.510964  303818 mustload.go:65] Loading cluster: addons-694780
	I1019 12:18:06.511370  303818 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:18:06.511408  303818 addons.go:606] checking whether the cluster is paused
	I1019 12:18:06.511532  303818 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:18:06.511567  303818 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:18:06.512046  303818 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:18:06.535547  303818 ssh_runner.go:195] Run: systemctl --version
	I1019 12:18:06.535601  303818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:18:06.556549  303818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:18:06.665484  303818 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:18:06.665576  303818 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:18:06.703441  303818 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:18:06.703461  303818 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:18:06.703466  303818 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:18:06.703470  303818 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:18:06.703473  303818 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:18:06.703477  303818 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:18:06.703481  303818 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:18:06.703484  303818 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:18:06.703487  303818 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:18:06.703494  303818 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:18:06.703497  303818 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:18:06.703501  303818 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:18:06.703504  303818 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:18:06.703507  303818 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:18:06.703510  303818 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:18:06.703519  303818 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:18:06.703523  303818 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:18:06.703527  303818 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:18:06.703531  303818 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:18:06.703534  303818 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:18:06.703538  303818 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:18:06.703541  303818 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:18:06.703548  303818 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:18:06.703551  303818 cri.go:89] found id: ""
	I1019 12:18:06.703599  303818 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:18:06.722638  303818 out.go:203] 
	W1019 12:18:06.725629  303818 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:18:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:18:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:18:06.725733  303818 out.go:285] * 
	* 
	W1019 12:18:06.732027  303818 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:18:06.735162  303818 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.31s)

x
+
TestAddons/parallel/Yakd (5.45s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-wwfqw" [eb50ecf9-27c4-4f07-9165-0aea12aaac8a] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003182755s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-694780 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-694780 addons disable yakd --alsologtostderr -v=1: exit status 11 (447.639033ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:18:00.132169  303757 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:18:00.133199  303757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:18:00.133230  303757 out.go:374] Setting ErrFile to fd 2...
	I1019 12:18:00.133237  303757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:18:00.133586  303757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:18:00.134013  303757 mustload.go:65] Loading cluster: addons-694780
	I1019 12:18:00.134546  303757 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:18:00.134574  303757 addons.go:606] checking whether the cluster is paused
	I1019 12:18:00.134690  303757 config.go:182] Loaded profile config "addons-694780": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:18:00.134708  303757 host.go:66] Checking if "addons-694780" exists ...
	I1019 12:18:00.135230  303757 cli_runner.go:164] Run: docker container inspect addons-694780 --format={{.State.Status}}
	I1019 12:18:00.186789  303757 ssh_runner.go:195] Run: systemctl --version
	I1019 12:18:00.186860  303757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-694780
	I1019 12:18:00.219210  303757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/addons-694780/id_rsa Username:docker}
	I1019 12:18:00.346389  303757 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:18:00.346503  303757 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:18:00.395425  303757 cri.go:89] found id: "babbcf90f6ac904ad2f1c25a59f3fae6037578ff6c985c97854cc8e67861c441"
	I1019 12:18:00.395469  303757 cri.go:89] found id: "4af26279aa6f2eea07df4d6dc9cb321abc82026e12a909ecae07903da4702995"
	I1019 12:18:00.395476  303757 cri.go:89] found id: "dbc7b2d7b48c237972a607237cc3947e59c7b5443b011c68106e9c392f0d975d"
	I1019 12:18:00.395483  303757 cri.go:89] found id: "53d99b9c1fa5a95c9e37f1a45f62cd51c1afdda905ffeb6b9248b13034343462"
	I1019 12:18:00.395488  303757 cri.go:89] found id: "1159fff2343a5c3e477dab117e1ff2e1a6416a99cdfb9f1705fbd592646d9832"
	I1019 12:18:00.395494  303757 cri.go:89] found id: "976c559427e0253107c6466d60d473a0039bdf7878194ad5bdaca6966253b26b"
	I1019 12:18:00.395498  303757 cri.go:89] found id: "4e8fe40f4a508cc1d1ac055b2b1bf2c19b1903cd3d5775fc32b7874ac809c0d8"
	I1019 12:18:00.395501  303757 cri.go:89] found id: "3c758f6c5602f3f9b9443ccc165652180df691ad854e4d71ce3f716ff6f9a39b"
	I1019 12:18:00.395505  303757 cri.go:89] found id: "c93fad6f2f68178d142a7ba603152834d1f7d544574f809291adbea8ae600e2a"
	I1019 12:18:00.395513  303757 cri.go:89] found id: "d66a0ce31c46f79abdc4cf890ad0bf9a061e0e382b03fc34c6d7bddbfe74e583"
	I1019 12:18:00.395519  303757 cri.go:89] found id: "019ec1d7cee73b30cbc0eb97d1a28afba7149627fff4e81ca5ad784b17e42ce6"
	I1019 12:18:00.395522  303757 cri.go:89] found id: "ad7a2781a873fe6c6ec31e43f52230ed09385cd447ef4cbd60561041e64afaaf"
	I1019 12:18:00.395526  303757 cri.go:89] found id: "80882ef14df043e6e23e23bb0ae867fdf8b865123d2ff32882a7c44cffea2388"
	I1019 12:18:00.395537  303757 cri.go:89] found id: "795c9019de22203a4870134a91ae2e2344e2a0d9a3c45ee2ca515e2465ef1af7"
	I1019 12:18:00.395546  303757 cri.go:89] found id: "1a89d3feb3cc173765597de5bc7c4a783544a76def605ad64c02aba17ef45ca3"
	I1019 12:18:00.395552  303757 cri.go:89] found id: "c1af9139ef29a1d92c70afefa4ebf2ccc782581c328281ec4e2f86b553c3c467"
	I1019 12:18:00.395555  303757 cri.go:89] found id: "c10333b42245b14943c5c33809857b909c2a03945bf30eedb9643814fdd3b23d"
	I1019 12:18:00.395563  303757 cri.go:89] found id: "0e8ae7e9978df10dd5c1ae839fb322082252d2948bb1e640b22d86f207cac350"
	I1019 12:18:00.395566  303757 cri.go:89] found id: "1fbbdaf72898fb8d9d32b6836dde4d8c8bd3aeb32b5e40d0a08e758f67f5eeb9"
	I1019 12:18:00.395570  303757 cri.go:89] found id: "20700ce554fdeeb461937fe8bd8c17a66655f95c7782ad23f8855f6fc85e921d"
	I1019 12:18:00.395576  303757 cri.go:89] found id: "ebc110500cd3df83646f04053eb6ac2cb475cfd7069d77e04732e6c38ee16e85"
	I1019 12:18:00.395583  303757 cri.go:89] found id: "4b12dbb5293748cac62f0aa74605c7890efe62f72b75cd8622373e2ae02a2e7a"
	I1019 12:18:00.395587  303757 cri.go:89] found id: "974f057716664d84b595f63044c6aaf6d840e979157a7453177950977adff06a"
	I1019 12:18:00.395590  303757 cri.go:89] found id: ""
	I1019 12:18:00.395658  303757 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:18:00.417353  303757 out.go:203] 
	W1019 12:18:00.420436  303757 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:18:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:18:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:18:00.420468  303757 out.go:285] * 
	* 
	W1019 12:18:00.427325  303757 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:18:00.430250  303757 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-694780 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.45s)

x
+
TestFunctional/parallel/ServiceCmdConnect (603.85s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-970848 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-970848 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-tvhcp" [52e3bd49-6d1d-4049-9e06-2ff8ced33393] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-970848 -n functional-970848
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-19 12:33:58.041019762 +0000 UTC m=+1232.687839848
functional_test.go:1645: (dbg) Run:  kubectl --context functional-970848 describe po hello-node-connect-7d85dfc575-tvhcp -n default
functional_test.go:1645: (dbg) kubectl --context functional-970848 describe po hello-node-connect-7d85dfc575-tvhcp -n default:
Name:             hello-node-connect-7d85dfc575-tvhcp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-970848/192.168.49.2
Start Time:       Sun, 19 Oct 2025 12:23:57 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jcglx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-jcglx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-tvhcp to functional-970848
Normal   Pulling    6m54s (x5 over 9m59s)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m54s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m54s (x5 over 9m59s)   kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
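
Note: the "short name mode is enforcing" event above is the root cause of this failure. CRI-O on this node enforces short-name resolution, and the unqualified kicbase/echo-server:latest evidently matches more than one configured search registry, so the non-interactive pull is rejected as ambiguous instead of silently picking one. A minimal host-side sketch of a fix, assuming access to the node's containers-registries configuration (the drop-in path and registry choice are illustrative, not taken from this run):

    # /etc/containers/registries.conf.d/99-echo-server.conf (illustrative drop-in)
    # Pin the short name to a single registry so enforcing mode resolves it
    # unambiguously in non-interactive pulls.
    [aliases]
      "kicbase/echo-server" = "docker.io/kicbase/echo-server"

Relaxing enforcement (short-name-mode = "permissive") would also unblock the pull, but fully qualifying the image name in the test's deployment is the least invasive option.
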
functional_test.go:1645: (dbg) Run:  kubectl --context functional-970848 logs hello-node-connect-7d85dfc575-tvhcp -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-970848 logs hello-node-connect-7d85dfc575-tvhcp -n default: exit status 1 (99.619351ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-tvhcp" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-970848 logs hello-node-connect-7d85dfc575-tvhcp -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-970848 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-tvhcp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-970848/192.168.49.2
Start Time:       Sun, 19 Oct 2025 12:23:57 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jcglx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-jcglx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-tvhcp to functional-970848
Normal   Pulling    6m54s (x5 over 9m59s)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m54s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m54s (x5 over 9m59s)   kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-970848 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-970848 logs -l app=hello-node-connect: exit status 1 (79.977501ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-tvhcp" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-970848 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-970848 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.62.239
IPs:                      10.98.62.239
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31004/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
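
Note: the empty Endpoints field above follows directly from the pod failure. A Service only lists endpoints for pods that match its selector and report Ready, so with the lone hello-node-connect pod stuck in ImagePullBackOff there is nothing behind NodePort 31004 and any connection attempt must fail. Two standard checks that make this visible (nothing here is test-specific):

    kubectl --context functional-970848 get endpoints hello-node-connect
    kubectl --context functional-970848 get pods -l app=hello-node-connect -o wide
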
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-970848
helpers_test.go:243: (dbg) docker inspect functional-970848:

-- stdout --
	[
	    {
	        "Id": "e6d1ca507738e6d5eeae01a5cf48f063ba1eea3cd8c50be6d74ec1f65f6295b5",
	        "Created": "2025-10-19T12:21:04.813423263Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310332,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:21:04.881607575Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e6d1ca507738e6d5eeae01a5cf48f063ba1eea3cd8c50be6d74ec1f65f6295b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6d1ca507738e6d5eeae01a5cf48f063ba1eea3cd8c50be6d74ec1f65f6295b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6d1ca507738e6d5eeae01a5cf48f063ba1eea3cd8c50be6d74ec1f65f6295b5/hosts",
	        "LogPath": "/var/lib/docker/containers/e6d1ca507738e6d5eeae01a5cf48f063ba1eea3cd8c50be6d74ec1f65f6295b5/e6d1ca507738e6d5eeae01a5cf48f063ba1eea3cd8c50be6d74ec1f65f6295b5-json.log",
	        "Name": "/functional-970848",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-970848:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-970848",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6d1ca507738e6d5eeae01a5cf48f063ba1eea3cd8c50be6d74ec1f65f6295b5",
	                "LowerDir": "/var/lib/docker/overlay2/6a5e361b43fc410771a844483051115424b1d0564df46c36a7bd8566849908c1-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a5e361b43fc410771a844483051115424b1d0564df46c36a7bd8566849908c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a5e361b43fc410771a844483051115424b1d0564df46c36a7bd8566849908c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a5e361b43fc410771a844483051115424b1d0564df46c36a7bd8566849908c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-970848",
	                "Source": "/var/lib/docker/volumes/functional-970848/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-970848",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-970848",
	                "name.minikube.sigs.k8s.io": "functional-970848",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e0729ae6eed087772520004961b110ed68c9b73afa005e975de818f49c864ee",
	            "SandboxKey": "/var/run/docker/netns/6e0729ae6eed",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-970848": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:fe:93:d2:01:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1955984e340b32b6fae23cf5652de44fad9d5ef8d611d3de8efa3642db76e630",
	                    "EndpointID": "520b645fb0a860befb254773bcb16fed88032cd35d6259078bab27285d8a31a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-970848",
	                        "e6d1ca507738"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
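
Note: in the Ports block of the inspect output above, the cluster's API server port 8441/tcp is published on host port 33151, which is how the status and logs commands below reach the container from the Jenkins host. The same Go-template query the harness uses for the SSH port (visible in the provisioning log further down) can read any of these mappings; a hypothetical one-off:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-970848
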
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-970848 -n functional-970848
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-970848 logs -n 25: (1.7168996s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-970848 ssh echo hello                                                                                                                          │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ image   │ functional-970848 image load --daemon kicbase/echo-server:functional-970848 --alsologtostderr                                                             │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ ssh     │ functional-970848 ssh cat /etc/hostname                                                                                                                   │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ tunnel  │ functional-970848 tunnel --alsologtostderr                                                                                                                │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │                     │
	│ tunnel  │ functional-970848 tunnel --alsologtostderr                                                                                                                │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │                     │
	│ image   │ functional-970848 image ls                                                                                                                                │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ image   │ functional-970848 image load --daemon kicbase/echo-server:functional-970848 --alsologtostderr                                                             │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ tunnel  │ functional-970848 tunnel --alsologtostderr                                                                                                                │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │                     │
	│ image   │ functional-970848 image ls                                                                                                                                │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ image   │ functional-970848 image load --daemon kicbase/echo-server:functional-970848 --alsologtostderr                                                             │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ image   │ functional-970848 image ls                                                                                                                                │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ image   │ functional-970848 image save kicbase/echo-server:functional-970848 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ image   │ functional-970848 image rm kicbase/echo-server:functional-970848 --alsologtostderr                                                                        │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ image   │ functional-970848 image ls                                                                                                                                │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ image   │ functional-970848 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ image   │ functional-970848 image save --daemon kicbase/echo-server:functional-970848 --alsologtostderr                                                             │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ ssh     │ functional-970848 ssh sudo cat /etc/ssl/certs/294518.pem                                                                                                  │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ ssh     │ functional-970848 ssh sudo cat /usr/share/ca-certificates/294518.pem                                                                                      │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ ssh     │ functional-970848 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ ssh     │ functional-970848 ssh sudo cat /etc/ssl/certs/2945182.pem                                                                                                 │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ ssh     │ functional-970848 ssh sudo cat /usr/share/ca-certificates/2945182.pem                                                                                     │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ ssh     │ functional-970848 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ addons  │ functional-970848 addons list                                                                                                                             │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ addons  │ functional-970848 addons list -o json                                                                                                                     │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	│ ssh     │ functional-970848 ssh sudo cat /etc/test/nested/copy/294518/hosts                                                                                         │ functional-970848 │ jenkins │ v1.37.0 │ 19 Oct 25 12:23 UTC │ 19 Oct 25 12:23 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:22:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:22:56.704793  314472 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:22:56.704913  314472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:22:56.704916  314472 out.go:374] Setting ErrFile to fd 2...
	I1019 12:22:56.704919  314472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:22:56.705160  314472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:22:56.705500  314472 out.go:368] Setting JSON to false
	I1019 12:22:56.706399  314472 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7527,"bootTime":1760869050,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 12:22:56.706473  314472 start.go:141] virtualization:  
	I1019 12:22:56.710169  314472 out.go:179] * [functional-970848] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 12:22:56.713218  314472 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:22:56.713267  314472 notify.go:220] Checking for updates...
	I1019 12:22:56.719364  314472 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:22:56.722203  314472 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 12:22:56.725049  314472 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 12:22:56.727843  314472 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 12:22:56.730623  314472 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:22:56.734068  314472 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:22:56.734196  314472 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:22:56.755274  314472 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 12:22:56.755383  314472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:22:56.834993  314472 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-19 12:22:56.824661042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:22:56.835084  314472 docker.go:318] overlay module found
	I1019 12:22:56.838191  314472 out.go:179] * Using the docker driver based on existing profile
	I1019 12:22:56.840991  314472 start.go:305] selected driver: docker
	I1019 12:22:56.840999  314472 start.go:925] validating driver "docker" against &{Name:functional-970848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-970848 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:22:56.841083  314472 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:22:56.841194  314472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:22:56.899920  314472 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-19 12:22:56.890356012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:22:56.900326  314472 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:22:56.900348  314472 cni.go:84] Creating CNI manager for ""
	I1019 12:22:56.900404  314472 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:22:56.900443  314472 start.go:349] cluster config:
	{Name:functional-970848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-970848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:22:56.903595  314472 out.go:179] * Starting "functional-970848" primary control-plane node in "functional-970848" cluster
	I1019 12:22:56.906442  314472 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:22:56.909395  314472 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:22:56.912134  314472 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:22:56.912178  314472 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 12:22:56.912186  314472 cache.go:58] Caching tarball of preloaded images
	I1019 12:22:56.912185  314472 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:22:56.912262  314472 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 12:22:56.912271  314472 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:22:56.912374  314472 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/config.json ...
	I1019 12:22:56.931497  314472 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:22:56.931508  314472 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:22:56.931527  314472 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:22:56.931549  314472 start.go:360] acquireMachinesLock for functional-970848: {Name:mk7a7d83b5bb13185db79fda49fc386135f07b5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:22:56.931613  314472 start.go:364] duration metric: took 47.64µs to acquireMachinesLock for "functional-970848"
	I1019 12:22:56.931632  314472 start.go:96] Skipping create...Using existing machine configuration
	I1019 12:22:56.931636  314472 fix.go:54] fixHost starting: 
	I1019 12:22:56.931910  314472 cli_runner.go:164] Run: docker container inspect functional-970848 --format={{.State.Status}}
	I1019 12:22:56.950118  314472 fix.go:112] recreateIfNeeded on functional-970848: state=Running err=<nil>
	W1019 12:22:56.950139  314472 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 12:22:56.953262  314472 out.go:252] * Updating the running docker "functional-970848" container ...
	I1019 12:22:56.953284  314472 machine.go:93] provisionDockerMachine start ...
	I1019 12:22:56.953360  314472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
	I1019 12:22:56.970204  314472 main.go:141] libmachine: Using SSH client type: native
	I1019 12:22:56.970516  314472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1019 12:22:56.970523  314472 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:22:57.121410  314472 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-970848
	
	I1019 12:22:57.121432  314472 ubuntu.go:182] provisioning hostname "functional-970848"
	I1019 12:22:57.121544  314472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
	I1019 12:22:57.139661  314472 main.go:141] libmachine: Using SSH client type: native
	I1019 12:22:57.139984  314472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1019 12:22:57.139993  314472 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-970848 && echo "functional-970848" | sudo tee /etc/hostname
	I1019 12:22:57.299213  314472 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-970848
	
	I1019 12:22:57.299289  314472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
	I1019 12:22:57.317435  314472 main.go:141] libmachine: Using SSH client type: native
	I1019 12:22:57.317837  314472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1019 12:22:57.317852  314472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-970848' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-970848/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-970848' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:22:57.470166  314472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:22:57.470183  314472 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 12:22:57.470207  314472 ubuntu.go:190] setting up certificates
	I1019 12:22:57.470228  314472 provision.go:84] configureAuth start
	I1019 12:22:57.470285  314472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-970848
	I1019 12:22:57.488892  314472 provision.go:143] copyHostCerts
	I1019 12:22:57.488959  314472 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 12:22:57.488976  314472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 12:22:57.489047  314472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 12:22:57.489182  314472 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 12:22:57.489187  314472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 12:22:57.489211  314472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 12:22:57.489260  314472 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 12:22:57.489263  314472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 12:22:57.489284  314472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 12:22:57.489344  314472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.functional-970848 san=[127.0.0.1 192.168.49.2 functional-970848 localhost minikube]
	I1019 12:22:58.015748  314472 provision.go:177] copyRemoteCerts
	I1019 12:22:58.015803  314472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:22:58.015844  314472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
	I1019 12:22:58.034059  314472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/functional-970848/id_rsa Username:docker}
	I1019 12:22:58.138406  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:22:58.164319  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:22:58.182442  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:22:58.200559  314472 provision.go:87] duration metric: took 730.3083ms to configureAuth
	I1019 12:22:58.200577  314472 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:22:58.200808  314472 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:22:58.200919  314472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
	I1019 12:22:58.217929  314472 main.go:141] libmachine: Using SSH client type: native
	I1019 12:22:58.218282  314472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1019 12:22:58.218294  314472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:23:03.581942  314472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:23:03.581956  314472 machine.go:96] duration metric: took 6.628665292s to provisionDockerMachine
	I1019 12:23:03.581964  314472 start.go:293] postStartSetup for "functional-970848" (driver="docker")
	I1019 12:23:03.581974  314472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:23:03.582032  314472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:23:03.582087  314472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
	I1019 12:23:03.600144  314472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/functional-970848/id_rsa Username:docker}
	I1019 12:23:03.701567  314472 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:23:03.704790  314472 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:23:03.704806  314472 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:23:03.704817  314472 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 12:23:03.704871  314472 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 12:23:03.704952  314472 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 12:23:03.705030  314472 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/test/nested/copy/294518/hosts -> hosts in /etc/test/nested/copy/294518
	I1019 12:23:03.705070  314472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/294518
	I1019 12:23:03.712438  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 12:23:03.729130  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/test/nested/copy/294518/hosts --> /etc/test/nested/copy/294518/hosts (40 bytes)
	I1019 12:23:03.746406  314472 start.go:296] duration metric: took 164.427343ms for postStartSetup
	I1019 12:23:03.746497  314472 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:23:03.746550  314472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
	I1019 12:23:03.763529  314472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/functional-970848/id_rsa Username:docker}
	I1019 12:23:03.862670  314472 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:23:03.867264  314472 fix.go:56] duration metric: took 6.935620577s for fixHost
	I1019 12:23:03.867280  314472 start.go:83] releasing machines lock for "functional-970848", held for 6.935659643s
	I1019 12:23:03.867350  314472 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-970848
	I1019 12:23:03.884513  314472 ssh_runner.go:195] Run: cat /version.json
	I1019 12:23:03.884560  314472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
	I1019 12:23:03.884828  314472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:23:03.884880  314472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
	I1019 12:23:03.903503  314472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/functional-970848/id_rsa Username:docker}
	I1019 12:23:03.905935  314472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/functional-970848/id_rsa Username:docker}
	I1019 12:23:04.097752  314472 ssh_runner.go:195] Run: systemctl --version
	I1019 12:23:04.104368  314472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:23:04.141288  314472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:23:04.145764  314472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:23:04.145831  314472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:23:04.153597  314472 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 12:23:04.153612  314472 start.go:495] detecting cgroup driver to use...
	I1019 12:23:04.153642  314472 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 12:23:04.153704  314472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:23:04.169382  314472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:23:04.182572  314472 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:23:04.182629  314472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:23:04.198584  314472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:23:04.212029  314472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:23:04.352535  314472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:23:04.490807  314472 docker.go:234] disabling docker service ...
	I1019 12:23:04.490874  314472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:23:04.506911  314472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:23:04.520419  314472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:23:04.656932  314472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:23:04.795224  314472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:23:04.808826  314472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:23:04.822903  314472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:23:04.822957  314472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:23:04.831835  314472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 12:23:04.831906  314472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:23:04.841007  314472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:23:04.849940  314472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:23:04.858853  314472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:23:04.870632  314472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:23:04.880034  314472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:23:04.888735  314472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
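Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports. Assuming they all applied cleanly, grepping the drop-in should show roughly the following (a sketch; the surrounding layout depends on the base image's 02-crio.conf):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",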
	I1019 12:23:04.898626  314472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:23:04.906156  314472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:23:04.913692  314472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:23:05.044322  314472 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 12:23:11.352089  314472 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.307743759s)
	I1019 12:23:11.352105  314472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:23:11.352159  314472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:23:11.355879  314472 start.go:563] Will wait 60s for crictl version
	I1019 12:23:11.355938  314472 ssh_runner.go:195] Run: which crictl
	I1019 12:23:11.359509  314472 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:23:11.384238  314472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:23:11.384319  314472 ssh_runner.go:195] Run: crio --version
	I1019 12:23:11.412648  314472 ssh_runner.go:195] Run: crio --version
	I1019 12:23:11.443563  314472 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:23:11.446513  314472 cli_runner.go:164] Run: docker network inspect functional-970848 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:23:11.462563  314472 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1019 12:23:11.469850  314472 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1019 12:23:11.472672  314472 kubeadm.go:883] updating cluster {Name:functional-970848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-970848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:23:11.472792  314472 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:23:11.472865  314472 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:23:11.510104  314472 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:23:11.510115  314472 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:23:11.510173  314472 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:23:11.537134  314472 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:23:11.537147  314472 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:23:11.537153  314472 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1019 12:23:11.537253  314472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-970848 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-970848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:23:11.537334  314472 ssh_runner.go:195] Run: crio config
	I1019 12:23:11.601407  314472 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1019 12:23:11.601499  314472 cni.go:84] Creating CNI manager for ""
	I1019 12:23:11.601509  314472 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:23:11.601531  314472 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:23:11.601575  314472 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-970848 NodeName:functional-970848 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:23:11.601740  314472 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-970848"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:23:11.601822  314472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:23:11.609736  314472 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:23:11.609797  314472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:23:11.617454  314472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 12:23:11.630460  314472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:23:11.643231  314472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
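The 2064-byte kubeadm.yaml.new written here is the multi-document config dumped above. Recent kubeadm releases ship a validator for exactly this kind of file, so it can be checked by hand before the restart path consumes it (a sketch using this run's paths; minikube does not perform this step):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new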
	I1019 12:23:11.656399  314472 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:23:11.660183  314472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:23:11.795889  314472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:23:11.809114  314472 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848 for IP: 192.168.49.2
	I1019 12:23:11.809125  314472 certs.go:195] generating shared ca certs ...
	I1019 12:23:11.809140  314472 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:23:11.809280  314472 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 12:23:11.809324  314472 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 12:23:11.809331  314472 certs.go:257] generating profile certs ...
	I1019 12:23:11.809420  314472 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.key
	I1019 12:23:11.809465  314472 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/apiserver.key.5fd7c2d9
	I1019 12:23:11.809500  314472 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/proxy-client.key
	I1019 12:23:11.809606  314472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem (1338 bytes)
	W1019 12:23:11.809633  314472 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518_empty.pem, impossibly tiny 0 bytes
	I1019 12:23:11.809640  314472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 12:23:11.809662  314472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:23:11.809739  314472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:23:11.809764  314472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 12:23:11.809804  314472 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 12:23:11.810350  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:23:11.829069  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:23:11.846579  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:23:11.864605  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 12:23:11.882746  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 12:23:11.900072  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:23:11.917117  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:23:11.934468  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:23:11.951972  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:23:11.969456  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem --> /usr/share/ca-certificates/294518.pem (1338 bytes)
	I1019 12:23:11.987370  314472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /usr/share/ca-certificates/2945182.pem (1708 bytes)
	I1019 12:23:12.007191  314472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:23:12.021526  314472 ssh_runner.go:195] Run: openssl version
	I1019 12:23:12.028495  314472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2945182.pem && ln -fs /usr/share/ca-certificates/2945182.pem /etc/ssl/certs/2945182.pem"
	I1019 12:23:12.037539  314472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2945182.pem
	I1019 12:23:12.041540  314472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:20 /usr/share/ca-certificates/2945182.pem
	I1019 12:23:12.041598  314472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2945182.pem
	I1019 12:23:12.084767  314472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2945182.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:23:12.092606  314472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:23:12.101180  314472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:23:12.105040  314472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:23:12.105095  314472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:23:12.146052  314472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:23:12.153891  314472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294518.pem && ln -fs /usr/share/ca-certificates/294518.pem /etc/ssl/certs/294518.pem"
	I1019 12:23:12.162427  314472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294518.pem
	I1019 12:23:12.166200  314472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:20 /usr/share/ca-certificates/294518.pem
	I1019 12:23:12.166255  314472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294518.pem
	I1019 12:23:12.207038  314472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294518.pem /etc/ssl/certs/51391683.0"
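The hex symlink names come from OpenSSL's subject-name hashing: openssl x509 -hash prints the value the library expects to find as <hash>.0 under /etc/ssl/certs, which is why each certificate above is hashed first and linked second. Reproducing the minikubeCA link from this run by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941   <- trust lookups resolve via /etc/ssl/certs/<hash>.0
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0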
	I1019 12:23:12.214920  314472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:23:12.218754  314472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:23:12.259475  314472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:23:12.300150  314472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:23:12.340825  314472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:23:12.381721  314472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:23:12.422547  314472 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
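Each of the six probes above relies on openssl's -checkend flag, which exits non-zero when the certificate expires within the given window, so 86400 seconds means "still valid for at least 24 hours". The same test standalone:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for >= 24h" || echo "expiring within 24h"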
	I1019 12:23:12.463497  314472 kubeadm.go:400] StartCluster: {Name:functional-970848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-970848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:23:12.463578  314472 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:23:12.463653  314472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:23:12.498100  314472 cri.go:89] found id: "ece0b55d642c57ae4069803e77b1922a35efeb6b781672ca574f56a28b1803b9"
	I1019 12:23:12.498111  314472 cri.go:89] found id: "f4d9b8fcbb05c778bc4810a4c4a38f9a8aad2177f4d9855b996744550ff65802"
	I1019 12:23:12.498115  314472 cri.go:89] found id: "fe862a7ff67b82c14b83bdcdb5138af6dd686bd04ee9a6b61df9628c4ff06b22"
	I1019 12:23:12.498117  314472 cri.go:89] found id: "9b3e546325149901014206e697c01cfa4ddabed75b785f3d8083cf7dcfccbd8a"
	I1019 12:23:12.498120  314472 cri.go:89] found id: "d9ff8803bdbd60a2709366efaed595db6272eb55e7d3fb922872e3f46025119e"
	I1019 12:23:12.498123  314472 cri.go:89] found id: "11491746337bc36377f826d2c942ba20cf5d8915daa08ccc7282bae2d6f46809"
	I1019 12:23:12.498125  314472 cri.go:89] found id: "c8483ff3ddc4f6ec4b072517520542733cdb7c795c45d8a1fd79227388a5a433"
	I1019 12:23:12.498127  314472 cri.go:89] found id: "b612cb435dab824a4aa5e7289c09d9f9a80fc6ed5f6fd5d2b7eb54f21d4a5d8e"
	I1019 12:23:12.498130  314472 cri.go:89] found id: "e42b5b12eade0ee0a3fde12f87117eaac1c460aea3bca05c01889d44728f9c94"
	I1019 12:23:12.498136  314472 cri.go:89] found id: "8fbb6ff13bf5848cce546d3d9f8aa47fbb221c89cd46f5123e999f034e93cdc9"
	I1019 12:23:12.498138  314472 cri.go:89] found id: "00bf23de2c9ed4224c0d1177cf2ed9bc0b1d604961f284be6e247459fc1d537b"
	I1019 12:23:12.498141  314472 cri.go:89] found id: "4dbf8b8a2091ce35760dc682bd94ab2b76c5c2c268e67e6ae94cf41e8ad0dce6"
	I1019 12:23:12.498143  314472 cri.go:89] found id: "622de35a18e4c89a9d31ad77f16d4eba3c648c24b9f754749e469787e8a85862"
	I1019 12:23:12.498145  314472 cri.go:89] found id: "65c9ddb6d95d40cbc4b0a22e6d7ae21a425a0572bb2711bfd584c598d1b619b6"
	I1019 12:23:12.498149  314472 cri.go:89] found id: "d6f9cc711f11bbfd0f2cd94cc5fa6ef76c1f69b165f84bcebaed02756103899a"
	I1019 12:23:12.498152  314472 cri.go:89] found id: "0ea3728dbb499620099bed371bfc6e6fc3b17fda62069111c040404fa797cd5b"
	I1019 12:23:12.498157  314472 cri.go:89] found id: ""
	I1019 12:23:12.498204  314472 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:23:12.508735  314472 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:23:12Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:23:12.508801  314472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:23:12.516306  314472 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:23:12.516314  314472 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:23:12.516361  314472 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:23:12.523329  314472 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:23:12.523834  314472 kubeconfig.go:125] found "functional-970848" server: "https://192.168.49.2:8441"
	I1019 12:23:12.525056  314472 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:23:12.532742  314472 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-19 12:21:14.600147805 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-19 12:23:11.649793223 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1019 12:23:12.532750  314472 kubeadm.go:1160] stopping kube-system containers ...
	I1019 12:23:12.532761  314472 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1019 12:23:12.532815  314472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:23:12.562364  314472 cri.go:89] found id: "ece0b55d642c57ae4069803e77b1922a35efeb6b781672ca574f56a28b1803b9"
	I1019 12:23:12.562375  314472 cri.go:89] found id: "f4d9b8fcbb05c778bc4810a4c4a38f9a8aad2177f4d9855b996744550ff65802"
	I1019 12:23:12.562378  314472 cri.go:89] found id: "fe862a7ff67b82c14b83bdcdb5138af6dd686bd04ee9a6b61df9628c4ff06b22"
	I1019 12:23:12.562381  314472 cri.go:89] found id: "9b3e546325149901014206e697c01cfa4ddabed75b785f3d8083cf7dcfccbd8a"
	I1019 12:23:12.562383  314472 cri.go:89] found id: "d9ff8803bdbd60a2709366efaed595db6272eb55e7d3fb922872e3f46025119e"
	I1019 12:23:12.562386  314472 cri.go:89] found id: "11491746337bc36377f826d2c942ba20cf5d8915daa08ccc7282bae2d6f46809"
	I1019 12:23:12.562388  314472 cri.go:89] found id: "c8483ff3ddc4f6ec4b072517520542733cdb7c795c45d8a1fd79227388a5a433"
	I1019 12:23:12.562391  314472 cri.go:89] found id: "b612cb435dab824a4aa5e7289c09d9f9a80fc6ed5f6fd5d2b7eb54f21d4a5d8e"
	I1019 12:23:12.562393  314472 cri.go:89] found id: "e42b5b12eade0ee0a3fde12f87117eaac1c460aea3bca05c01889d44728f9c94"
	I1019 12:23:12.562400  314472 cri.go:89] found id: "8fbb6ff13bf5848cce546d3d9f8aa47fbb221c89cd46f5123e999f034e93cdc9"
	I1019 12:23:12.562402  314472 cri.go:89] found id: "00bf23de2c9ed4224c0d1177cf2ed9bc0b1d604961f284be6e247459fc1d537b"
	I1019 12:23:12.562404  314472 cri.go:89] found id: "4dbf8b8a2091ce35760dc682bd94ab2b76c5c2c268e67e6ae94cf41e8ad0dce6"
	I1019 12:23:12.562406  314472 cri.go:89] found id: "622de35a18e4c89a9d31ad77f16d4eba3c648c24b9f754749e469787e8a85862"
	I1019 12:23:12.562408  314472 cri.go:89] found id: "65c9ddb6d95d40cbc4b0a22e6d7ae21a425a0572bb2711bfd584c598d1b619b6"
	I1019 12:23:12.562411  314472 cri.go:89] found id: "d6f9cc711f11bbfd0f2cd94cc5fa6ef76c1f69b165f84bcebaed02756103899a"
	I1019 12:23:12.562426  314472 cri.go:89] found id: "0ea3728dbb499620099bed371bfc6e6fc3b17fda62069111c040404fa797cd5b"
	I1019 12:23:12.562429  314472 cri.go:89] found id: ""
	I1019 12:23:12.562434  314472 cri.go:252] Stopping containers: [ece0b55d642c57ae4069803e77b1922a35efeb6b781672ca574f56a28b1803b9 f4d9b8fcbb05c778bc4810a4c4a38f9a8aad2177f4d9855b996744550ff65802 fe862a7ff67b82c14b83bdcdb5138af6dd686bd04ee9a6b61df9628c4ff06b22 9b3e546325149901014206e697c01cfa4ddabed75b785f3d8083cf7dcfccbd8a d9ff8803bdbd60a2709366efaed595db6272eb55e7d3fb922872e3f46025119e 11491746337bc36377f826d2c942ba20cf5d8915daa08ccc7282bae2d6f46809 c8483ff3ddc4f6ec4b072517520542733cdb7c795c45d8a1fd79227388a5a433 b612cb435dab824a4aa5e7289c09d9f9a80fc6ed5f6fd5d2b7eb54f21d4a5d8e e42b5b12eade0ee0a3fde12f87117eaac1c460aea3bca05c01889d44728f9c94 8fbb6ff13bf5848cce546d3d9f8aa47fbb221c89cd46f5123e999f034e93cdc9 00bf23de2c9ed4224c0d1177cf2ed9bc0b1d604961f284be6e247459fc1d537b 4dbf8b8a2091ce35760dc682bd94ab2b76c5c2c268e67e6ae94cf41e8ad0dce6 622de35a18e4c89a9d31ad77f16d4eba3c648c24b9f754749e469787e8a85862 65c9ddb6d95d40cbc4b0a22e6d7ae21a425a0572bb2711bfd584c598d1b619b6 d6f9cc711f11bbfd0f2cd94cc5fa6ef76c1f69b165f84bcebaed02756103899a 0ea3728dbb499620099bed371bfc6e6fc3b17fda62069111c040404fa797cd5b]
	I1019 12:23:12.562509  314472 ssh_runner.go:195] Run: which crictl
	I1019 12:23:12.566075  314472 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 ece0b55d642c57ae4069803e77b1922a35efeb6b781672ca574f56a28b1803b9 f4d9b8fcbb05c778bc4810a4c4a38f9a8aad2177f4d9855b996744550ff65802 fe862a7ff67b82c14b83bdcdb5138af6dd686bd04ee9a6b61df9628c4ff06b22 9b3e546325149901014206e697c01cfa4ddabed75b785f3d8083cf7dcfccbd8a d9ff8803bdbd60a2709366efaed595db6272eb55e7d3fb922872e3f46025119e 11491746337bc36377f826d2c942ba20cf5d8915daa08ccc7282bae2d6f46809 c8483ff3ddc4f6ec4b072517520542733cdb7c795c45d8a1fd79227388a5a433 b612cb435dab824a4aa5e7289c09d9f9a80fc6ed5f6fd5d2b7eb54f21d4a5d8e e42b5b12eade0ee0a3fde12f87117eaac1c460aea3bca05c01889d44728f9c94 8fbb6ff13bf5848cce546d3d9f8aa47fbb221c89cd46f5123e999f034e93cdc9 00bf23de2c9ed4224c0d1177cf2ed9bc0b1d604961f284be6e247459fc1d537b 4dbf8b8a2091ce35760dc682bd94ab2b76c5c2c268e67e6ae94cf41e8ad0dce6 622de35a18e4c89a9d31ad77f16d4eba3c648c24b9f754749e469787e8a85862 65c9ddb6d95d40cbc4b0a22e6d7ae21a425a0572bb2711bfd584c598d1b619b6 d6f9cc711f11bbfd0f2cd94cc5fa6ef76c1f69b165f84bcebaed02756103899a 0ea3728dbb499620099bed371bfc6e6fc3b17fda62069111c040404fa797cd5b
	I1019 12:23:12.666945  314472 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1019 12:23:12.772721  314472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 12:23:12.780670  314472 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 19 12:21 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct 19 12:21 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 19 12:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct 19 12:21 /etc/kubernetes/scheduler.conf
	
	I1019 12:23:12.780726  314472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1019 12:23:12.788624  314472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1019 12:23:12.796324  314472 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:23:12.796379  314472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 12:23:12.803731  314472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1019 12:23:12.811592  314472 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:23:12.811653  314472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 12:23:12.818940  314472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1019 12:23:12.826444  314472 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:23:12.826527  314472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 12:23:12.834103  314472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 12:23:12.842242  314472 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 12:23:12.888663  314472 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 12:23:14.651062  314472 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.762375153s)
	I1019 12:23:14.651120  314472 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1019 12:23:14.860796  314472 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 12:23:14.943999  314472 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1019 12:23:15.024960  314472 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:23:15.025042  314472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:23:15.525943  314472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:23:16.025636  314472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:23:16.047420  314472 api_server.go:72] duration metric: took 1.022466909s to wait for apiserver process to appear ...
	I1019 12:23:16.047436  314472 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:23:16.047458  314472 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1019 12:23:19.719600  314472 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 12:23:19.719616  314472 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 12:23:19.719628  314472 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1019 12:23:19.797759  314472 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 12:23:19.797786  314472 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 12:23:20.048257  314472 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1019 12:23:20.068964  314472 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:23:20.068979  314472 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
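minikube polls a plain GET on /healthz: on failure the apiserver answers with the per-check breakdown seen above, and once healthy it answers with a bare "ok". The equivalent manual probe (a sketch; -k skips verification against the cluster CA, the anonymous request explains the earlier 403s, and ?verbose forces the breakdown even when healthy):

    curl -k "https://192.168.49.2:8441/healthz?verbose"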
	I1019 12:23:20.548228  314472 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1019 12:23:20.561939  314472 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:23:20.561955  314472 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:23:21.048098  314472 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1019 12:23:21.064305  314472 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:23:21.064323  314472 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:23:21.547563  314472 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1019 12:23:21.555963  314472 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1019 12:23:21.569973  314472 api_server.go:141] control plane version: v1.34.1
	I1019 12:23:21.569989  314472 api_server.go:131] duration metric: took 5.5225488s to wait for apiserver health ...
	I1019 12:23:21.569997  314472 cni.go:84] Creating CNI manager for ""
	I1019 12:23:21.570002  314472 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:23:21.573582  314472 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 12:23:21.576660  314472 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 12:23:21.581034  314472 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 12:23:21.581045  314472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 12:23:21.595864  314472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
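With the CNI manifest applied, the kindnet DaemonSet should keep one pod on the node; a manual check would look like the following (the app=kindnet label is an assumption about the manifest, not confirmed by this log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet -o wide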
	I1019 12:23:22.106422  314472 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:23:22.109956  314472 system_pods.go:59] 8 kube-system pods found
	I1019 12:23:22.109977  314472 system_pods.go:61] "coredns-66bc5c9577-6fhln" [27a7676a-4f4f-482d-a02e-f53d91afa6f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:23:22.109984  314472 system_pods.go:61] "etcd-functional-970848" [054721fa-aa3b-471d-a02c-19a67d21f9bb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:23:22.109989  314472 system_pods.go:61] "kindnet-r24r7" [4e210973-4a20-479c-8e67-568831748dd6] Running
	I1019 12:23:22.109995  314472 system_pods.go:61] "kube-apiserver-functional-970848" [79533b42-51aa-4f63-ab17-9087907e4712] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:23:22.110001  314472 system_pods.go:61] "kube-controller-manager-functional-970848" [ee147173-01f8-4691-910e-dfcde5aeef7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:23:22.110006  314472 system_pods.go:61] "kube-proxy-bnjx8" [f57eb822-8a3e-4156-8249-292cf44a6233] Running
	I1019 12:23:22.110013  314472 system_pods.go:61] "kube-scheduler-functional-970848" [2e14799d-2f7c-440c-ae92-95d08a9b3694] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:23:22.110016  314472 system_pods.go:61] "storage-provisioner" [c2a46477-d3c8-4ad4-b914-907210b2389c] Running
	I1019 12:23:22.110022  314472 system_pods.go:74] duration metric: took 3.589121ms to wait for pod list to return data ...
	I1019 12:23:22.110028  314472 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:23:22.112551  314472 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 12:23:22.112568  314472 node_conditions.go:123] node cpu capacity is 2
	I1019 12:23:22.112578  314472 node_conditions.go:105] duration metric: took 2.54621ms to run NodePressure ...
	I1019 12:23:22.112635  314472 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 12:23:22.363817  314472 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1019 12:23:22.367263  314472 kubeadm.go:743] kubelet initialised
	I1019 12:23:22.367274  314472 kubeadm.go:744] duration metric: took 3.444143ms waiting for restarted kubelet to initialise ...
	I1019 12:23:22.367288  314472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:23:22.376589  314472 ops.go:34] apiserver oom_adj: -16
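The -16 read back here is on the kernel's legacy oom_adj scale (-17 to 15); a strongly negative value keeps the apiserver near the back of the OOM killer's queue. The same probe by hand, alongside the current interface:

    cat /proc/$(pgrep -xn kube-apiserver)/oom_adj        # legacy scale: -17..15
    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj  # current scale: -1000..1000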
	I1019 12:23:22.376614  314472 kubeadm.go:601] duration metric: took 9.860281224s to restartPrimaryControlPlane
	I1019 12:23:22.376622  314472 kubeadm.go:402] duration metric: took 9.913133545s to StartCluster
	I1019 12:23:22.376636  314472 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:23:22.376719  314472 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 12:23:22.377464  314472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:23:22.377766  314472 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:23:22.377964  314472 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:23:22.378088  314472 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:23:22.378155  314472 addons.go:69] Setting storage-provisioner=true in profile "functional-970848"
	I1019 12:23:22.378167  314472 addons.go:238] Setting addon storage-provisioner=true in "functional-970848"
	W1019 12:23:22.378172  314472 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:23:22.378196  314472 host.go:66] Checking if "functional-970848" exists ...
	I1019 12:23:22.378374  314472 addons.go:69] Setting default-storageclass=true in profile "functional-970848"
	I1019 12:23:22.378385  314472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-970848"
	I1019 12:23:22.378663  314472 cli_runner.go:164] Run: docker container inspect functional-970848 --format={{.State.Status}}
	I1019 12:23:22.378708  314472 cli_runner.go:164] Run: docker container inspect functional-970848 --format={{.State.Status}}
	I1019 12:23:22.383920  314472 out.go:179] * Verifying Kubernetes components...
	I1019 12:23:22.388650  314472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:23:22.415197  314472 addons.go:238] Setting addon default-storageclass=true in "functional-970848"
	W1019 12:23:22.415207  314472 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:23:22.415230  314472 host.go:66] Checking if "functional-970848" exists ...
	I1019 12:23:22.415624  314472 cli_runner.go:164] Run: docker container inspect functional-970848 --format={{.State.Status}}
	I1019 12:23:22.419664  314472 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:23:22.422572  314472 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:23:22.422583  314472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:23:22.422678  314472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
	I1019 12:23:22.436867  314472 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:23:22.436879  314472 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:23:22.436947  314472 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
	I1019 12:23:22.459175  314472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/functional-970848/id_rsa Username:docker}
	I1019 12:23:22.483228  314472 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/functional-970848/id_rsa Username:docker}
	I1019 12:23:22.595059  314472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:23:22.619080  314472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:23:22.651406  314472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:23:23.484522  314472 node_ready.go:35] waiting up to 6m0s for node "functional-970848" to be "Ready" ...
	I1019 12:23:23.489374  314472 node_ready.go:49] node "functional-970848" is "Ready"
	I1019 12:23:23.489389  314472 node_ready.go:38] duration metric: took 4.847936ms for node "functional-970848" to be "Ready" ...
	I1019 12:23:23.489400  314472 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:23:23.489457  314472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:23:23.495795  314472 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 12:23:23.498519  314472 addons.go:514] duration metric: took 1.120422383s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 12:23:23.502805  314472 api_server.go:72] duration metric: took 1.124798075s to wait for apiserver process to appear ...
	I1019 12:23:23.502827  314472 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:23:23.502846  314472 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1019 12:23:23.512030  314472 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1019 12:23:23.512998  314472 api_server.go:141] control plane version: v1.34.1
	I1019 12:23:23.513012  314472 api_server.go:131] duration metric: took 10.178555ms to wait for apiserver health ...
	I1019 12:23:23.513019  314472 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:23:23.516342  314472 system_pods.go:59] 8 kube-system pods found
	I1019 12:23:23.516361  314472 system_pods.go:61] "coredns-66bc5c9577-6fhln" [27a7676a-4f4f-482d-a02e-f53d91afa6f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:23:23.516368  314472 system_pods.go:61] "etcd-functional-970848" [054721fa-aa3b-471d-a02c-19a67d21f9bb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:23:23.516373  314472 system_pods.go:61] "kindnet-r24r7" [4e210973-4a20-479c-8e67-568831748dd6] Running
	I1019 12:23:23.516379  314472 system_pods.go:61] "kube-apiserver-functional-970848" [79533b42-51aa-4f63-ab17-9087907e4712] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:23:23.516384  314472 system_pods.go:61] "kube-controller-manager-functional-970848" [ee147173-01f8-4691-910e-dfcde5aeef7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:23:23.516388  314472 system_pods.go:61] "kube-proxy-bnjx8" [f57eb822-8a3e-4156-8249-292cf44a6233] Running
	I1019 12:23:23.516393  314472 system_pods.go:61] "kube-scheduler-functional-970848" [2e14799d-2f7c-440c-ae92-95d08a9b3694] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:23:23.516399  314472 system_pods.go:61] "storage-provisioner" [c2a46477-d3c8-4ad4-b914-907210b2389c] Running
	I1019 12:23:23.516406  314472 system_pods.go:74] duration metric: took 3.381676ms to wait for pod list to return data ...
	I1019 12:23:23.516412  314472 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:23:23.519099  314472 default_sa.go:45] found service account: "default"
	I1019 12:23:23.519112  314472 default_sa.go:55] duration metric: took 2.695825ms for default service account to be created ...
	I1019 12:23:23.519120  314472 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:23:23.522169  314472 system_pods.go:86] 8 kube-system pods found
	I1019 12:23:23.522188  314472 system_pods.go:89] "coredns-66bc5c9577-6fhln" [27a7676a-4f4f-482d-a02e-f53d91afa6f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:23:23.522195  314472 system_pods.go:89] "etcd-functional-970848" [054721fa-aa3b-471d-a02c-19a67d21f9bb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:23:23.522199  314472 system_pods.go:89] "kindnet-r24r7" [4e210973-4a20-479c-8e67-568831748dd6] Running
	I1019 12:23:23.522205  314472 system_pods.go:89] "kube-apiserver-functional-970848" [79533b42-51aa-4f63-ab17-9087907e4712] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:23:23.522210  314472 system_pods.go:89] "kube-controller-manager-functional-970848" [ee147173-01f8-4691-910e-dfcde5aeef7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:23:23.522213  314472 system_pods.go:89] "kube-proxy-bnjx8" [f57eb822-8a3e-4156-8249-292cf44a6233] Running
	I1019 12:23:23.522219  314472 system_pods.go:89] "kube-scheduler-functional-970848" [2e14799d-2f7c-440c-ae92-95d08a9b3694] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:23:23.522221  314472 system_pods.go:89] "storage-provisioner" [c2a46477-d3c8-4ad4-b914-907210b2389c] Running
	I1019 12:23:23.522227  314472 system_pods.go:126] duration metric: took 3.103247ms to wait for k8s-apps to be running ...
	I1019 12:23:23.522234  314472 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:23:23.522292  314472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:23:23.537124  314472 system_svc.go:56] duration metric: took 14.880775ms WaitForService to wait for kubelet
	I1019 12:23:23.537142  314472 kubeadm.go:586] duration metric: took 1.15913879s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:23:23.537158  314472 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:23:23.540581  314472 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 12:23:23.540596  314472 node_conditions.go:123] node cpu capacity is 2
	I1019 12:23:23.540606  314472 node_conditions.go:105] duration metric: took 3.444224ms to run NodePressure ...
	I1019 12:23:23.540618  314472 start.go:241] waiting for startup goroutines ...
	I1019 12:23:23.540625  314472 start.go:246] waiting for cluster config update ...
	I1019 12:23:23.540634  314472 start.go:255] writing updated cluster config ...
	I1019 12:23:23.540935  314472 ssh_runner.go:195] Run: rm -f paused
	I1019 12:23:23.544534  314472 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:23:23.548027  314472 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6fhln" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:23:25.554097  314472 pod_ready.go:104] pod "coredns-66bc5c9577-6fhln" is not "Ready", error: <nil>
	I1019 12:23:27.553260  314472 pod_ready.go:94] pod "coredns-66bc5c9577-6fhln" is "Ready"
	I1019 12:23:27.553275  314472 pod_ready.go:86] duration metric: took 4.005234495s for pod "coredns-66bc5c9577-6fhln" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:23:27.556248  314472 pod_ready.go:83] waiting for pod "etcd-functional-970848" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:23:29.566742  314472 pod_ready.go:104] pod "etcd-functional-970848" is not "Ready", error: <nil>
	I1019 12:23:31.061427  314472 pod_ready.go:94] pod "etcd-functional-970848" is "Ready"
	I1019 12:23:31.061443  314472 pod_ready.go:86] duration metric: took 3.505181466s for pod "etcd-functional-970848" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:23:31.064121  314472 pod_ready.go:83] waiting for pod "kube-apiserver-functional-970848" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:23:31.069407  314472 pod_ready.go:94] pod "kube-apiserver-functional-970848" is "Ready"
	I1019 12:23:31.069422  314472 pod_ready.go:86] duration metric: took 5.285924ms for pod "kube-apiserver-functional-970848" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:23:31.072255  314472 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-970848" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:23:32.082621  314472 pod_ready.go:94] pod "kube-controller-manager-functional-970848" is "Ready"
	I1019 12:23:32.082637  314472 pod_ready.go:86] duration metric: took 1.010368109s for pod "kube-controller-manager-functional-970848" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:23:32.085798  314472 pod_ready.go:83] waiting for pod "kube-proxy-bnjx8" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:23:32.091450  314472 pod_ready.go:94] pod "kube-proxy-bnjx8" is "Ready"
	I1019 12:23:32.091465  314472 pod_ready.go:86] duration metric: took 5.654094ms for pod "kube-proxy-bnjx8" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:23:32.259779  314472 pod_ready.go:83] waiting for pod "kube-scheduler-functional-970848" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:23:32.659960  314472 pod_ready.go:94] pod "kube-scheduler-functional-970848" is "Ready"
	I1019 12:23:32.659973  314472 pod_ready.go:86] duration metric: took 400.182158ms for pod "kube-scheduler-functional-970848" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:23:32.659983  314472 pod_ready.go:40] duration metric: took 9.115430128s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:23:32.715209  314472 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 12:23:32.718224  314472 out.go:179] * Done! kubectl is now configured to use "functional-970848" cluster and "default" namespace by default
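	
	A minimal client-go sketch of the readiness poll that the node_ready and pod_ready lines above record: fetch the object, check its Ready condition, sleep, repeat until a deadline. This is illustrative Go, not minikube's own helper code; the kubeconfig path is an assumption (the "Done!" line says kubectl was configured for this cluster, and ~/.kube/config is the default location).
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumption: the kubeconfig written by minikube at the default path.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until the node's Ready condition is True, mirroring the
		// `waiting up to 6m0s for node "functional-970848" to be "Ready"` wait.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-970848", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for node readiness")
	}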
	
	
	==> CRI-O <==
	Oct 19 12:24:15 functional-970848 crio[3503]: time="2025-10-19T12:24:15.186522672Z" level=info msg="Stopping pod sandbox: 5078fa3260426eb76579bfa74461978834945a1469349e9374573fbef9f542db" id=2c0fb9d1-9155-4666-84a9-44686297ea1f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 12:24:15 functional-970848 crio[3503]: time="2025-10-19T12:24:15.186572052Z" level=info msg="Stopped pod sandbox (already stopped): 5078fa3260426eb76579bfa74461978834945a1469349e9374573fbef9f542db" id=2c0fb9d1-9155-4666-84a9-44686297ea1f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 12:24:15 functional-970848 crio[3503]: time="2025-10-19T12:24:15.186875096Z" level=info msg="Removing pod sandbox: 5078fa3260426eb76579bfa74461978834945a1469349e9374573fbef9f542db" id=6ad5ff88-4cb4-4f1a-b0ec-3a7b21425ab3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 12:24:15 functional-970848 crio[3503]: time="2025-10-19T12:24:15.190609245Z" level=info msg="Removed pod sandbox: 5078fa3260426eb76579bfa74461978834945a1469349e9374573fbef9f542db" id=6ad5ff88-4cb4-4f1a-b0ec-3a7b21425ab3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 12:24:15 functional-970848 crio[3503]: time="2025-10-19T12:24:15.191240211Z" level=info msg="Stopping pod sandbox: 2135d5dbe20c17b090d03b46f6303a5107e092c245126767bc410362047775dd" id=68d6b167-7c5d-4337-adb5-82490887a5ee name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 12:24:15 functional-970848 crio[3503]: time="2025-10-19T12:24:15.19129677Z" level=info msg="Stopped pod sandbox (already stopped): 2135d5dbe20c17b090d03b46f6303a5107e092c245126767bc410362047775dd" id=68d6b167-7c5d-4337-adb5-82490887a5ee name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 12:24:15 functional-970848 crio[3503]: time="2025-10-19T12:24:15.19164633Z" level=info msg="Removing pod sandbox: 2135d5dbe20c17b090d03b46f6303a5107e092c245126767bc410362047775dd" id=fc348be9-f534-41c1-8df6-b84641d3f195 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 12:24:15 functional-970848 crio[3503]: time="2025-10-19T12:24:15.197165095Z" level=info msg="Removed pod sandbox: 2135d5dbe20c17b090d03b46f6303a5107e092c245126767bc410362047775dd" id=fc348be9-f534-41c1-8df6-b84641d3f195 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 12:24:16 functional-970848 crio[3503]: time="2025-10-19T12:24:16.440272876Z" level=info msg="Running pod sandbox: default/hello-node-75c85bcc94-9km4z/POD" id=9341d715-adf5-4fc2-b8b4-44963e7af054 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:24:16 functional-970848 crio[3503]: time="2025-10-19T12:24:16.440345164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:24:16 functional-970848 crio[3503]: time="2025-10-19T12:24:16.445528564Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-9km4z Namespace:default ID:f92a38b507e8066db9beb86f4ba4a3069854ee9b79aba5b59d4779200a8baa48 UID:59eaa957-fac9-439c-854a-65af97d8aeb4 NetNS:/var/run/netns/6e779f6a-2b89-4b26-91b6-b9a23db452eb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cca0}] Aliases:map[]}"
	Oct 19 12:24:16 functional-970848 crio[3503]: time="2025-10-19T12:24:16.445701875Z" level=info msg="Adding pod default_hello-node-75c85bcc94-9km4z to CNI network \"kindnet\" (type=ptp)"
	Oct 19 12:24:16 functional-970848 crio[3503]: time="2025-10-19T12:24:16.456414632Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-9km4z Namespace:default ID:f92a38b507e8066db9beb86f4ba4a3069854ee9b79aba5b59d4779200a8baa48 UID:59eaa957-fac9-439c-854a-65af97d8aeb4 NetNS:/var/run/netns/6e779f6a-2b89-4b26-91b6-b9a23db452eb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cca0}] Aliases:map[]}"
	Oct 19 12:24:16 functional-970848 crio[3503]: time="2025-10-19T12:24:16.456610615Z" level=info msg="Checking pod default_hello-node-75c85bcc94-9km4z for CNI network kindnet (type=ptp)"
	Oct 19 12:24:16 functional-970848 crio[3503]: time="2025-10-19T12:24:16.45970619Z" level=info msg="Ran pod sandbox f92a38b507e8066db9beb86f4ba4a3069854ee9b79aba5b59d4779200a8baa48 with infra container: default/hello-node-75c85bcc94-9km4z/POD" id=9341d715-adf5-4fc2-b8b4-44963e7af054 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:24:16 functional-970848 crio[3503]: time="2025-10-19T12:24:16.46237488Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8d17697e-cfd2-4c79-9292-5481504584aa name=/runtime.v1.ImageService/PullImage
	Oct 19 12:24:32 functional-970848 crio[3503]: time="2025-10-19T12:24:32.055492226Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e70a3432-6090-4c2b-8d8a-f7b55ecd30a1 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:24:41 functional-970848 crio[3503]: time="2025-10-19T12:24:41.054797975Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cdc4e12a-b963-42fe-8fd0-652731991757 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:25:01 functional-970848 crio[3503]: time="2025-10-19T12:25:01.055187717Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2897fa5e-1f51-4ec0-820d-aa23a7393557 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:25:34 functional-970848 crio[3503]: time="2025-10-19T12:25:34.054976652Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=62bc6810-2aa3-4864-aa59-a173422fdc10 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:25:48 functional-970848 crio[3503]: time="2025-10-19T12:25:48.055145948Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=590a777f-b645-4a49-acac-d5fc0a5a3cc4 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:27:04 functional-970848 crio[3503]: time="2025-10-19T12:27:04.054718373Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4f0e7c9d-22b0-42b8-aba1-b0822f04eeda name=/runtime.v1.ImageService/PullImage
	Oct 19 12:27:17 functional-970848 crio[3503]: time="2025-10-19T12:27:17.055519639Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=029a3524-578a-43d6-852a-ce4ce6bcbe80 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:29:58 functional-970848 crio[3503]: time="2025-10-19T12:29:58.055552029Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=284e1b55-b42c-4105-b5c5-697e463933bc name=/runtime.v1.ImageService/PullImage
	Oct 19 12:30:02 functional-970848 crio[3503]: time="2025-10-19T12:30:02.055446072Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1dbdfdb4-b75b-42d8-89a7-e9c457f964ff name=/runtime.v1.ImageService/PullImage
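	
	The run of PullImage entries above shows CRI-O re-attempting the kicbase/echo-server:latest pull for several minutes with no corresponding "Pulled" line. A hedged sketch of asking the same runtime, over CRI-O's default socket, whether that image ever landed; everything here is illustrative and assumes it runs on the node itself (e.g. inside "minikube ssh").
	
	package main
	
	import (
		"context"
		"fmt"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		criv1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumption: CRI-O's default CRI socket path on the node.
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		img := criv1.NewImageServiceClient(conn)
		resp, err := img.ImageStatus(context.TODO(), &criv1.ImageStatusRequest{
			Image: &criv1.ImageSpec{Image: "kicbase/echo-server:latest"},
		})
		if err != nil {
			panic(err)
		}
		// A nil Image in the response means the pull has not yet succeeded.
		if resp.Image == nil {
			fmt.Println("image not present yet (pull still retrying)")
		} else {
			fmt.Println("image present:", resp.Image.Id)
		}
	}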
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6dc8a60c27792       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   c637bcc58eae8       sp-pod                                      default
	163d1cf74bd52       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   faab7e07714d4       nginx-svc                                   default
	4905e743bb30f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   7b01fa602a0ad       storage-provisioner                         kube-system
	fc4c88a0e05b0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   9de8b70a33024       kindnet-r24r7                               kube-system
	7087b3a270215       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   bf8cec1e16c82       kube-proxy-bnjx8                            kube-system
	8f5dda49b6014       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   0c6d1844e347f       coredns-66bc5c9577-6fhln                    kube-system
	5a5ab3cceaccf       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   82b79315e6c97       kube-apiserver-functional-970848            kube-system
	f3e0bdd5b65cb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   9753b2b1ee9f7       kube-controller-manager-functional-970848   kube-system
	31c32b1b59faa       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   2b6fcd0f23fa7       kube-scheduler-functional-970848            kube-system
	81a4d5461c478       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   8e3fccb829351       etcd-functional-970848                      kube-system
	ece0b55d642c5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   0c6d1844e347f       coredns-66bc5c9577-6fhln                    kube-system
	f4d9b8fcbb05c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   7b01fa602a0ad       storage-provisioner                         kube-system
	fe862a7ff67b8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   bf8cec1e16c82       kube-proxy-bnjx8                            kube-system
	9b3e546325149       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   8e3fccb829351       etcd-functional-970848                      kube-system
	d9ff8803bdbd6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   9de8b70a33024       kindnet-r24r7                               kube-system
	11491746337bc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   9753b2b1ee9f7       kube-controller-manager-functional-970848   kube-system
	c8483ff3ddc4f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   2b6fcd0f23fa7       kube-scheduler-functional-970848            kube-system
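	
	The table above is the view "crictl ps -a" renders from a single CRI RuntimeService.ListContainers call; a hedged Go sketch of the same query, under the same on-node socket assumptions as the previous snippet:
	
	package main
	
	import (
		"context"
		"fmt"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		criv1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		rt := criv1.NewRuntimeServiceClient(conn)
		// An empty filter lists every container, running or exited,
		// matching the CONTAINER/STATE/NAME columns above.
		resp, err := rt.ListContainers(context.TODO(), &criv1.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}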
	
	
	==> coredns [8f5dda49b60141565bf69bf917d805be7cb429f5bfc2ce8e7867c80367320d7b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49237 - 18901 "HINFO IN 4538011340678805729.8193834661964877184. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02084991s
	
	
	==> coredns [ece0b55d642c57ae4069803e77b1922a35efeb6b781672ca574f56a28b1803b9] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52174 - 44393 "HINFO IN 3648010295357381997.5331174724084598467. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034173838s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-970848
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-970848
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=functional-970848
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_21_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:21:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-970848
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:33:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:33:42 +0000   Sun, 19 Oct 2025 12:21:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:33:42 +0000   Sun, 19 Oct 2025 12:21:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:33:42 +0000   Sun, 19 Oct 2025 12:21:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:33:42 +0000   Sun, 19 Oct 2025 12:22:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-970848
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                882d9d3f-71b4-4793-b291-9062c1f46fd2
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-9km4z                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  default                     hello-node-connect-7d85dfc575-tvhcp          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-6fhln                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-970848                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-r24r7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-970848             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-970848    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bnjx8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-970848             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-970848 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-970848 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-970848 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-970848 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-970848 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-970848 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           12m                node-controller  Node functional-970848 event: Registered Node functional-970848 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-970848 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-970848 event: Registered Node functional-970848 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-970848 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-970848 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-970848 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-970848 event: Registered Node functional-970848 in Controller
	
	
	==> dmesg <==
	[Oct19 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015448] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.491491] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034667] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806219] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.239480] kauditd_printk_skb: 36 callbacks suppressed
	[Oct19 11:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct19 11:24] hrtimer: interrupt took 38365015 ns
	[Oct19 12:12] kauditd_printk_skb: 8 callbacks suppressed
	[Oct19 12:14] overlayfs: idmapped layers are currently not supported
	[  +0.068862] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct19 12:20] overlayfs: idmapped layers are currently not supported
	[Oct19 12:21] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [81a4d5461c478bac08e6d52cd6b4bb3cb8cde9d3bee774440b43630dee88e6c7] <==
	{"level":"warn","ts":"2025-10-19T12:23:17.990339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.016374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.019735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.040358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.063579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.077744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.123885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.154542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.174065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.221835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.253511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.281746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.338287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.361101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.395718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.411690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.431968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.449164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.501916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.529024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.546536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:23:18.714491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41866","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T12:33:16.869262Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1123}
	{"level":"info","ts":"2025-10-19T12:33:16.892919Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1123,"took":"23.288177ms","hash":3345834740,"current-db-size-bytes":3227648,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1425408,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-19T12:33:16.892969Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3345834740,"revision":1123,"compact-revision":-1}
	
	
	==> etcd [9b3e546325149901014206e697c01cfa4ddabed75b785f3d8083cf7dcfccbd8a] <==
	{"level":"warn","ts":"2025-10-19T12:22:34.667693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:22:34.691728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:22:34.726929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:22:34.749457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:22:34.779200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:22:34.789744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:22:34.885878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41008","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T12:22:58.389270Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T12:22:58.389324Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-970848","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-19T12:22:58.389416Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T12:22:58.541297Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T12:22:58.542769Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T12:22:58.542816Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-19T12:22:58.542885Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-19T12:22:58.542897Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-19T12:22:58.542878Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T12:22:58.542921Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T12:22:58.542931Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-19T12:22:58.542986Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T12:22:58.542998Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T12:22:58.543004Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T12:22:58.546692Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-19T12:22:58.546780Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T12:22:58.546814Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-19T12:22:58.546823Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-970848","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 12:33:59 up  2:16,  0 user,  load average: 0.28, 0.49, 1.59
	Linux functional-970848 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d9ff8803bdbd60a2709366efaed595db6272eb55e7d3fb922872e3f46025119e] <==
	I1019 12:22:31.929973       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:22:31.930353       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1019 12:22:31.931530       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:22:31.931545       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:22:31.931560       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:22:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:22:32.176756       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:22:32.181965       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:22:32.181995       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:22:32.182503       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:22:36.082528       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:22:36.082645       1 metrics.go:72] Registering metrics
	I1019 12:22:36.082892       1 controller.go:711] "Syncing nftables rules"
	I1019 12:22:42.169959       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:22:42.170060       1 main.go:301] handling current node
	I1019 12:22:52.171543       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:22:52.171664       1 main.go:301] handling current node
	
	
	==> kindnet [fc4c88a0e05b0f8a2637622071911dec5ffb308b1e606b2c9293dc0f005e5c74] <==
	I1019 12:31:50.838130       1 main.go:301] handling current node
	I1019 12:32:00.843578       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:32:00.843680       1 main.go:301] handling current node
	I1019 12:32:10.843554       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:32:10.843658       1 main.go:301] handling current node
	I1019 12:32:20.837231       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:32:20.837333       1 main.go:301] handling current node
	I1019 12:32:30.837429       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:32:30.837566       1 main.go:301] handling current node
	I1019 12:32:40.841759       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:32:40.841790       1 main.go:301] handling current node
	I1019 12:32:50.837702       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:32:50.837740       1 main.go:301] handling current node
	I1019 12:33:00.841768       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:33:00.841806       1 main.go:301] handling current node
	I1019 12:33:10.841746       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:33:10.841847       1 main.go:301] handling current node
	I1019 12:33:20.846220       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:33:20.846330       1 main.go:301] handling current node
	I1019 12:33:30.838935       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:33:30.838973       1 main.go:301] handling current node
	I1019 12:33:40.844193       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:33:40.844292       1 main.go:301] handling current node
	I1019 12:33:50.843934       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:33:50.844043       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5a5ab3cceaccfbfd88b759940a24758d257a84b4487c9b95693c7fc20a9b2359] <==
	I1019 12:23:19.942143       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 12:23:19.970209       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:23:19.981782       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 12:23:19.982151       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 12:23:19.982269       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 12:23:19.995047       1 cache.go:39] Caches are synced for autoregister controller
	I1019 12:23:20.006851       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 12:23:20.165615       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:23:20.579691       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:23:22.097622       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 12:23:22.213063       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:23:22.286338       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:23:22.296086       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:23:23.345502       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:23:23.396365       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:23:23.551616       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:23:36.053961       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.20.65"}
	E1019 12:23:40.277413       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1019 12:23:44.289753       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.159.183"}
	I1019 12:23:57.670254       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.62.239"}
	E1019 12:24:07.110376       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40488: use of closed network connection
	E1019 12:24:07.717535       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1019 12:24:16.004945       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45550: use of closed network connection
	I1019 12:24:16.204842       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.251.33"}
	I1019 12:33:19.854485       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [11491746337bc36377f826d2c942ba20cf5d8915daa08ccc7282bae2d6f46809] <==
	I1019 12:22:39.383202       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 12:22:39.383832       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 12:22:39.385772       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 12:22:39.387681       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 12:22:39.390738       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:22:39.405099       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:22:39.412793       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 12:22:39.412875       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 12:22:39.412926       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 12:22:39.412968       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 12:22:39.413058       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 12:22:39.413152       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 12:22:39.413197       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 12:22:39.413246       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 12:22:39.412810       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 12:22:39.412948       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 12:22:39.413389       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-970848"
	I1019 12:22:39.413452       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 12:22:39.414137       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 12:22:39.414212       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 12:22:39.414261       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 12:22:39.414289       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 12:22:39.414316       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 12:22:39.417752       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 12:22:39.424302       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [f3e0bdd5b65cb0ca90a5b3d52b11d5b65a9e01602241d4c48248a650554ee267] <==
	I1019 12:23:23.274326       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 12:23:23.274436       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 12:23:23.274513       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-970848"
	I1019 12:23:23.274550       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 12:23:23.276509       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 12:23:23.276527       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 12:23:23.279317       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 12:23:23.279378       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 12:23:23.285666       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:23:23.285715       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 12:23:23.285723       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 12:23:23.288359       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 12:23:23.290339       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 12:23:23.290460       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 12:23:23.290976       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 12:23:23.291005       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 12:23:23.291067       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 12:23:23.291111       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 12:23:23.291135       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 12:23:23.291154       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 12:23:23.291569       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 12:23:23.293535       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 12:23:23.294214       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 12:23:23.301368       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:23:23.308704       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [7087b3a270215bce7ed5e2607586a7f252ac83a77d793891b7e3a70f66db3608] <==
	I1019 12:23:20.772870       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:23:20.964440       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:23:21.170117       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:23:21.170264       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 12:23:21.170583       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:23:21.303922       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:23:21.304050       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:23:21.319751       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:23:21.320041       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:23:21.320065       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:23:21.321120       1 config.go:200] "Starting service config controller"
	I1019 12:23:21.321176       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:23:21.325531       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:23:21.325552       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:23:21.325570       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:23:21.325574       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:23:21.326262       1 config.go:309] "Starting node config controller"
	I1019 12:23:21.326278       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:23:21.326286       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:23:21.422028       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 12:23:21.425754       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:23:21.425818       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [fe862a7ff67b82c14b83bdcdb5138af6dd686bd04ee9a6b61df9628c4ff06b22] <==
	I1019 12:22:35.650299       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:22:36.710034       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:22:36.883220       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:22:36.901756       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 12:22:36.901843       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:22:37.272406       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:22:37.272533       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:22:37.277023       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:22:37.277379       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:22:37.277599       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:22:37.279018       1 config.go:200] "Starting service config controller"
	I1019 12:22:37.279092       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:22:37.279138       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:22:37.279165       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:22:37.279212       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:22:37.279239       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:22:37.279861       1 config.go:309] "Starting node config controller"
	I1019 12:22:37.283162       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:22:37.283243       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:22:37.379992       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:22:37.380037       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 12:22:37.380089       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [31c32b1b59faa21ae357cf9815b30040535b75222eacbf1be47c8c58520b17ac] <==
	I1019 12:23:20.221141       1 serving.go:386] Generated self-signed cert in-memory
	I1019 12:23:21.763377       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 12:23:21.763489       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:23:21.769201       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:23:21.769391       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 12:23:21.769450       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 12:23:21.769500       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 12:23:21.771340       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:23:21.797893       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:23:21.795225       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:23:21.803712       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:23:21.871043       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 12:23:21.899033       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:23:21.903986       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [c8483ff3ddc4f6ec4b072517520542733cdb7c795c45d8a1fd79227388a5a433] <==
	I1019 12:22:34.768405       1 serving.go:386] Generated self-signed cert in-memory
	I1019 12:22:37.484305       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 12:22:37.484438       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:22:37.489758       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:22:37.489940       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 12:22:37.489904       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:22:37.489892       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:22:37.490082       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:22:37.489873       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 12:22:37.490126       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 12:22:37.490155       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:22:37.590186       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:22:37.590292       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 12:22:37.590426       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:22:58.388492       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 12:22:58.388521       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 12:22:58.388541       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 12:22:58.388568       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:22:58.388587       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1019 12:22:58.388604       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:22:58.400576       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 12:22:58.400733       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 19 12:31:30 functional-970848 kubelet[3820]: E1019 12:31:30.055110    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tvhcp" podUID="52e3bd49-6d1d-4049-9e06-2ff8ced33393"
	Oct 19 12:31:32 functional-970848 kubelet[3820]: E1019 12:31:32.054710    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9km4z" podUID="59eaa957-fac9-439c-854a-65af97d8aeb4"
	Oct 19 12:31:41 functional-970848 kubelet[3820]: E1019 12:31:41.054147    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tvhcp" podUID="52e3bd49-6d1d-4049-9e06-2ff8ced33393"
	Oct 19 12:31:46 functional-970848 kubelet[3820]: E1019 12:31:46.054844    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9km4z" podUID="59eaa957-fac9-439c-854a-65af97d8aeb4"
	Oct 19 12:31:55 functional-970848 kubelet[3820]: E1019 12:31:55.055502    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tvhcp" podUID="52e3bd49-6d1d-4049-9e06-2ff8ced33393"
	Oct 19 12:31:58 functional-970848 kubelet[3820]: E1019 12:31:58.054887    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9km4z" podUID="59eaa957-fac9-439c-854a-65af97d8aeb4"
	Oct 19 12:32:06 functional-970848 kubelet[3820]: E1019 12:32:06.054803    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tvhcp" podUID="52e3bd49-6d1d-4049-9e06-2ff8ced33393"
	Oct 19 12:32:09 functional-970848 kubelet[3820]: E1019 12:32:09.054588    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9km4z" podUID="59eaa957-fac9-439c-854a-65af97d8aeb4"
	Oct 19 12:32:18 functional-970848 kubelet[3820]: E1019 12:32:18.055232    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tvhcp" podUID="52e3bd49-6d1d-4049-9e06-2ff8ced33393"
	Oct 19 12:32:20 functional-970848 kubelet[3820]: E1019 12:32:20.055151    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9km4z" podUID="59eaa957-fac9-439c-854a-65af97d8aeb4"
	Oct 19 12:32:32 functional-970848 kubelet[3820]: E1019 12:32:32.055077    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tvhcp" podUID="52e3bd49-6d1d-4049-9e06-2ff8ced33393"
	Oct 19 12:32:33 functional-970848 kubelet[3820]: E1019 12:32:33.054161    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9km4z" podUID="59eaa957-fac9-439c-854a-65af97d8aeb4"
	Oct 19 12:32:46 functional-970848 kubelet[3820]: E1019 12:32:46.054057    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9km4z" podUID="59eaa957-fac9-439c-854a-65af97d8aeb4"
	Oct 19 12:32:47 functional-970848 kubelet[3820]: E1019 12:32:47.054832    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tvhcp" podUID="52e3bd49-6d1d-4049-9e06-2ff8ced33393"
	Oct 19 12:32:57 functional-970848 kubelet[3820]: E1019 12:32:57.054935    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9km4z" podUID="59eaa957-fac9-439c-854a-65af97d8aeb4"
	Oct 19 12:32:59 functional-970848 kubelet[3820]: E1019 12:32:59.055730    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tvhcp" podUID="52e3bd49-6d1d-4049-9e06-2ff8ced33393"
	Oct 19 12:33:09 functional-970848 kubelet[3820]: E1019 12:33:09.055972    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9km4z" podUID="59eaa957-fac9-439c-854a-65af97d8aeb4"
	Oct 19 12:33:13 functional-970848 kubelet[3820]: E1019 12:33:13.055265    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tvhcp" podUID="52e3bd49-6d1d-4049-9e06-2ff8ced33393"
	Oct 19 12:33:24 functional-970848 kubelet[3820]: E1019 12:33:24.054121    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9km4z" podUID="59eaa957-fac9-439c-854a-65af97d8aeb4"
	Oct 19 12:33:24 functional-970848 kubelet[3820]: E1019 12:33:24.054260    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tvhcp" podUID="52e3bd49-6d1d-4049-9e06-2ff8ced33393"
	Oct 19 12:33:37 functional-970848 kubelet[3820]: E1019 12:33:37.054716    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tvhcp" podUID="52e3bd49-6d1d-4049-9e06-2ff8ced33393"
	Oct 19 12:33:38 functional-970848 kubelet[3820]: E1019 12:33:38.055171    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9km4z" podUID="59eaa957-fac9-439c-854a-65af97d8aeb4"
	Oct 19 12:33:48 functional-970848 kubelet[3820]: E1019 12:33:48.054940    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tvhcp" podUID="52e3bd49-6d1d-4049-9e06-2ff8ced33393"
	Oct 19 12:33:53 functional-970848 kubelet[3820]: E1019 12:33:53.054258    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9km4z" podUID="59eaa957-fac9-439c-854a-65af97d8aeb4"
	Oct 19 12:33:59 functional-970848 kubelet[3820]: E1019 12:33:59.055006    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-tvhcp" podUID="52e3bd49-6d1d-4049-9e06-2ff8ced33393"
	
	
	==> storage-provisioner [4905e743bb30fadff8f15d8a1a47d46516d16e981d649e4f18d85eb76007048d] <==
	W1019 12:33:34.908322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:36.910980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:36.915328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:38.918254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:38.925093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:40.927914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:40.932441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:42.935171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:42.939407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:44.942843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:44.949567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:46.952282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:46.956675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:48.959972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:48.964239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:50.967231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:50.973526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:52.976167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:52.982650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:54.986546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:54.990818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:56.993498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:56.998017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:59.001958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:33:59.009106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f4d9b8fcbb05c778bc4810a4c4a38f9a8aad2177f4d9855b996744550ff65802] <==
	I1019 12:22:33.126873       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 12:22:36.126971       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 12:22:36.127026       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 12:22:36.228033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:22:39.701830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:22:43.962487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:22:47.560863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:22:50.614815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:22:53.636867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:22:53.642159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:22:53.642306       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 12:22:53.642485       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-970848_e2cf2ff4-ffc3-4945-a9f9-26a7fc276337!
	I1019 12:22:53.644108       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a11023e-1c6d-4720-85a8-723a99ff018d", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-970848_e2cf2ff4-ffc3-4945-a9f9-26a7fc276337 became leader
	W1019 12:22:53.649974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:22:53.653444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:22:53.742898       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-970848_e2cf2ff4-ffc3-4945-a9f9-26a7fc276337!
	W1019 12:22:55.656517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:22:55.661242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:22:57.665099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:22:57.669956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-970848 -n functional-970848
helpers_test.go:269: (dbg) Run:  kubectl --context functional-970848 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-9km4z hello-node-connect-7d85dfc575-tvhcp
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-970848 describe pod hello-node-75c85bcc94-9km4z hello-node-connect-7d85dfc575-tvhcp
helpers_test.go:290: (dbg) kubectl --context functional-970848 describe pod hello-node-75c85bcc94-9km4z hello-node-connect-7d85dfc575-tvhcp:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-9km4z
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-970848/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 12:24:16 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lx5zg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lx5zg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m45s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9km4z to functional-970848
	  Normal   Pulling    6m44s (x5 over 9m45s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m44s (x5 over 9m45s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m44s (x5 over 9m45s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m34s (x21 over 9m44s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m34s (x21 over 9m44s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-tvhcp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-970848/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 12:23:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jcglx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jcglx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-tvhcp to functional-970848
	  Normal   Pulling    6m57s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m57s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m57s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.85s)
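A note on the recurring pull failure above: CRI-O resolves unqualified image names through the containers-registries.conf(5) short-name policy, and in enforcing mode a short name such as kicbase/echo-server:latest that could resolve against more than one unqualified-search registry is rejected as an "ambiguous list" instead of being pulled. The sketch below shows what a drop-in relaxing that policy could look like; the file path is hypothetical, and nothing in this log confirms how the kic base image lays out its registry configuration. Referencing the image by a fully qualified name (for example docker.io/kicbase/echo-server) sidesteps the policy entirely.

        # /etc/containers/registries.conf.d/99-short-names.conf  (hypothetical path)
        # A single search registry removes the ambiguity; "permissive" (instead of
        # the "enforcing" mode reported in the kubelet errors) warns rather than fails.
        unqualified-search-registries = ["docker.io"]
        short-name-mode = "permissive"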

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image load --daemon kicbase/echo-server:functional-970848 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-970848" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.20s)
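The assertion here only checks `image ls` output, so the log does not show why the loaded tag went missing; given the short-name errors elsewhere in this run, one hedged variant is to keep every reference fully qualified so neither Docker nor CRI-O applies short-name resolution. Whether this fixes the load on this runner is an assumption, not something the log establishes.

        # Hypothetical fully qualified variant of the failing sequence:
        docker tag kicbase/echo-server:functional-970848 docker.io/kicbase/echo-server:functional-970848
        out/minikube-linux-arm64 -p functional-970848 image load --daemon docker.io/kicbase/echo-server:functional-970848
        out/minikube-linux-arm64 -p functional-970848 image ls | grep echo-server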

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image load --daemon kicbase/echo-server:functional-970848 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-970848" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.16s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-970848
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image load --daemon kicbase/echo-server:functional-970848 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-970848 image load --daemon kicbase/echo-server:functional-970848 --alsologtostderr: (1.064990681s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-970848" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image save kicbase/echo-server:functional-970848 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1019 12:23:47.905370  318061 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:23:47.905588  318061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:23:47.905617  318061 out.go:374] Setting ErrFile to fd 2...
	I1019 12:23:47.905636  318061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:23:47.905960  318061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:23:47.906665  318061 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:23:47.906835  318061 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:23:47.907427  318061 cli_runner.go:164] Run: docker container inspect functional-970848 --format={{.State.Status}}
	I1019 12:23:47.929537  318061 ssh_runner.go:195] Run: systemctl --version
	I1019 12:23:47.929668  318061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
	I1019 12:23:47.946228  318061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/functional-970848/id_rsa Username:docker}
	I1019 12:23:48.048487  318061 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1019 12:23:48.048570  318061 cache_images.go:254] Failed to load cached images for "functional-970848": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1019 12:23:48.048589  318061 cache_images.go:266] failed pushing to: functional-970848

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
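This failure is downstream of ImageSaveToFile: the `image save` above exited without writing the tar, so this `image load` stats a path that was never created. A minimal sketch of the roundtrip the two tests exercise, with an explicit check between the steps (the /tmp path is illustrative):

        # Save, verify the artifact exists and is non-empty, then load it back.
        out/minikube-linux-arm64 -p functional-970848 image save kicbase/echo-server:functional-970848 /tmp/echo-server-save.tar --alsologtostderr
        test -s /tmp/echo-server-save.tar || { echo "image save produced no tar"; exit 1; }
        out/minikube-linux-arm64 -p functional-970848 image load /tmp/echo-server-save.tar --alsologtostderr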

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-970848
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image save --daemon kicbase/echo-server:functional-970848 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-970848
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-970848: exit status 1 (22.524574ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-970848

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-970848

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
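functional_test.go:447 inspects the localhost/-prefixed name, so `image save --daemon` is expected to re-tag the image that way on the Docker side; neither spelling is present here, consistent with the earlier save/load failures. An illustrative check of both spellings (not part of the test):

        docker image inspect --format '{{.Id}}' kicbase/echo-server:functional-970848 || true
        docker image inspect --format '{{.Id}}' localhost/kicbase/echo-server:functional-970848 || true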

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-970848 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-970848 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-9km4z" [59eaa957-fac9-439c-854a-65af97d8aeb4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1019 12:24:29.797783  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:26:45.933260  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:27:13.639129  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:31:45.933492  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-970848 -n functional-970848
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-19 12:34:16.664332521 +0000 UTC m=+1251.311152541
functional_test.go:1460: (dbg) Run:  kubectl --context functional-970848 describe po hello-node-75c85bcc94-9km4z -n default
functional_test.go:1460: (dbg) kubectl --context functional-970848 describe po hello-node-75c85bcc94-9km4z -n default:
Name:             hello-node-75c85bcc94-9km4z
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-970848/192.168.49.2
Start Time:       Sun, 19 Oct 2025 12:24:16 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lx5zg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-lx5zg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9km4z to functional-970848
  Normal   Pulling    6m59s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m59s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m59s (x5 over 10m)     kubelet            Error: ErrImagePull
  Normal   BackOff    4m49s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m49s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-970848 logs hello-node-75c85bcc94-9km4z -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-970848 logs hello-node-75c85bcc94-9km4z -n default: exit status 1 (133.946419ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-9km4z" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-970848 logs hello-node-75c85bcc94-9km4z -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.92s)
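The ServiceCmd subtests that follow inherit this state: the deployment never produced a ready pod because the unqualified image reference hit the same enforcing short-name policy seen throughout this run. A hedged variant of the deploy step using a fully qualified image, plus an explicit readiness gate (the timeout value is arbitrary):

        kubectl --context functional-970848 create deployment hello-node --image=docker.io/kicbase/echo-server
        kubectl --context functional-970848 expose deployment hello-node --type=NodePort --port=8080
        kubectl --context functional-970848 wait --for=condition=Ready pod -l app=hello-node --timeout=600s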

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970848 service --namespace=default --https --url hello-node: exit status 115 (551.320889ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30112
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-970848 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970848 service hello-node --url --format={{.IP}}: exit status 115 (437.683402ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-970848 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970848 service hello-node --url: exit status 115 (388.949102ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30112
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-970848 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30112
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.39s)
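
Note that the NodePort endpoint (http://192.168.49.2:30112) is printed on stdout before the command exits 115, so the failure is about missing backing pods rather than address resolution. A hedged sketch of checking for ready Service endpoints up front by shelling out to kubectl (assumes kubectl on PATH and a reachable cluster; hasReadyEndpoints is a hypothetical helper that roughly mirrors the condition behind SVC_UNREACHABLE):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasReadyEndpoints reports whether the Service has at least one ready
	// address; an empty result matches the "no running pod for service
	// hello-node found" condition in the stderr above.
	func hasReadyEndpoints(namespace, svc string) (bool, error) {
		out, err := exec.Command("kubectl", "-n", namespace, "get", "endpoints", svc,
			"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) != "", nil
	}

	func main() {
		ready, err := hasReadyEndpoints("default", "hello-node")
		fmt.Println(ready, err)
	}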

TestJSONOutput/pause/Command (2.05s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-919847 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-919847 --output=json --user=testUser: exit status 80 (2.052566964s)

-- stdout --
	{"specversion":"1.0","id":"f4bc0fc1-fc0b-4b01-a035-d5ff95d7f6db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-919847 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"95e26990-af23-48d8-86dd-d50f9436780f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-19T12:47:01Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"2a72443d-4cee-49f4-a484-c27bb595114d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-919847 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.05s)
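
Each line emitted under --output=json is a CloudEvents-style envelope, as the captured stdout shows. A sketch of a Go type that decodes those envelopes (field names are read off the events above; Data stays a loose string map because step and error events carry different keys):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Event mirrors the envelope seen in the captured stdout above.
	type Event struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"x","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"80","name":"GUEST_PAUSE"}}`
		var ev Event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
	}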

TestJSONOutput/unpause/Command (1.41s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-919847 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-919847 --output=json --user=testUser: exit status 80 (1.406743405s)

-- stdout --
	{"specversion":"1.0","id":"806ba8f6-401f-4c98-bff9-0f8eaa7a7507","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-919847 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"2f2297ec-6a82-443d-971a-782f3a3048e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-19T12:47:02Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"3ddcd0e6-f911-4f9d-a754-8017a3b1c533","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-919847 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.41s)
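
Unpause trips over the same `sudo runc list -f json` failure, surfaced as GUEST_UNPAUSE rather than GUEST_PAUSE, so a single fix to the runtime state lookup would likely clear both. As a usage example for the envelope type sketched above, a scanner that filters a captured stream down to its error events (assumes one JSON object per line, as in these captures; the Event type is abbreviated):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"io"
		"os"
	)

	// Event is the envelope type sketched above, abbreviated to what the
	// filter needs.
	type Event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	// errorsFrom keeps only the error events from a minikube --output=json
	// stream (one JSON object per line, as in the captures above).
	func errorsFrom(r io.Reader) ([]Event, error) {
		var errs []Event
		sc := bufio.NewScanner(r)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // box-drawing messages make lines long
		for sc.Scan() {
			var ev Event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // tolerate non-JSON noise in the stream
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				errs = append(errs, ev)
			}
		}
		return errs, sc.Err()
	}

	func main() {
		evs, _ := errorsFrom(os.Stdin)
		for _, ev := range evs {
			fmt.Println(ev.Data["name"], ev.Data["exitcode"])
		}
	}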

TestPause/serial/Pause (8.44s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-052658 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-052658 --alsologtostderr -v=5: exit status 80 (2.48367117s)

-- stdout --
	* Pausing node pause-052658 ... 
	
	

-- /stdout --
** stderr ** 
	I1019 13:05:35.104329  434973 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:05:35.107679  434973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:05:35.107709  434973 out.go:374] Setting ErrFile to fd 2...
	I1019 13:05:35.107718  434973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:05:35.108089  434973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:05:35.108479  434973 out.go:368] Setting JSON to false
	I1019 13:05:35.108644  434973 mustload.go:65] Loading cluster: pause-052658
	I1019 13:05:35.109253  434973 config.go:182] Loaded profile config "pause-052658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:05:35.109850  434973 cli_runner.go:164] Run: docker container inspect pause-052658 --format={{.State.Status}}
	I1019 13:05:35.138423  434973 host.go:66] Checking if "pause-052658" exists ...
	I1019 13:05:35.138742  434973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:05:35.241363  434973 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-19 13:05:35.226295398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:05:35.242040  434973 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-052658 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 13:05:35.245332  434973 out.go:179] * Pausing node pause-052658 ... 
	I1019 13:05:35.249232  434973 host.go:66] Checking if "pause-052658" exists ...
	I1019 13:05:35.249714  434973 ssh_runner.go:195] Run: systemctl --version
	I1019 13:05:35.249805  434973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-052658
	I1019 13:05:35.278906  434973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33343 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/pause-052658/id_rsa Username:docker}
	I1019 13:05:35.390437  434973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:05:35.408168  434973 pause.go:52] kubelet running: true
	I1019 13:05:35.408240  434973 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:05:35.704409  434973 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:05:35.704516  434973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:05:35.815635  434973 cri.go:89] found id: "c743f2b2cc5739d6f671d60d15ea27e27dfa0dc935153abf39b1ade383be12c8"
	I1019 13:05:35.815656  434973 cri.go:89] found id: "962513f0b7d745c9f24d3922de11904ff5dac0b2b94327d9b2481cfa5d29c246"
	I1019 13:05:35.815661  434973 cri.go:89] found id: "ba8f62ba490de8933a05b5c6dbf528ace592eba1da16695b3e24170c833da729"
	I1019 13:05:35.815665  434973 cri.go:89] found id: "d7ffff287898431d46f269ae1eba7808cb5fa242b40b83b1d32861a66655d7a8"
	I1019 13:05:35.815668  434973 cri.go:89] found id: "8bdbd9430d1867563c01e9db16c16d8bfc47dfbd4064de68b62e3c608fc7b2e8"
	I1019 13:05:35.815672  434973 cri.go:89] found id: "9036d93a9870e51a8553d29c237178734288ec8578cd01fe4a9d30733a29a989"
	I1019 13:05:35.815676  434973 cri.go:89] found id: "c87f85518ffefcd9ed464c1e8ec3f02cb34777237b1b757d35de45530e13d804"
	I1019 13:05:35.815679  434973 cri.go:89] found id: "9de130db3a61f28e3afc80c22ab1dcda87eb80e3e5cad06bbdf1723cbbc02659"
	I1019 13:05:35.815681  434973 cri.go:89] found id: "fa5349ebdab5aa344012950f607a1526ac8a79065f14d86c23329d96790f97a2"
	I1019 13:05:35.815689  434973 cri.go:89] found id: "5d464678ea1d810867398d806ef9ecea0b7e7e536a9ccd4a7598f0cb18a5d5e8"
	I1019 13:05:35.815692  434973 cri.go:89] found id: "bb49a02b287e654e3bf830c5ec876e1c796bfe354b6a4345250db63f8963a09b"
	I1019 13:05:35.815695  434973 cri.go:89] found id: "0e93a892e96f5ce20eb832477b72857cd295910746fafbd1f048bbf773aaaed1"
	I1019 13:05:35.815698  434973 cri.go:89] found id: "2f49b3722734ec5fa7cb1b7440bec821f2cfc59804041aba24306e9dcc504795"
	I1019 13:05:35.815701  434973 cri.go:89] found id: "d676f6db0dd2dacfd3bf4b36c2ba236c4e1cae0c8626d009575ea36888e03436"
	I1019 13:05:35.815704  434973 cri.go:89] found id: ""
	I1019 13:05:35.815754  434973 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:05:35.827709  434973 retry.go:31] will retry after 192.193001ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:05:35Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:05:36.020090  434973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:05:36.035045  434973 pause.go:52] kubelet running: false
	I1019 13:05:36.035113  434973 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:05:36.214178  434973 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:05:36.214257  434973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:05:36.335845  434973 cri.go:89] found id: "c743f2b2cc5739d6f671d60d15ea27e27dfa0dc935153abf39b1ade383be12c8"
	I1019 13:05:36.335866  434973 cri.go:89] found id: "962513f0b7d745c9f24d3922de11904ff5dac0b2b94327d9b2481cfa5d29c246"
	I1019 13:05:36.335870  434973 cri.go:89] found id: "ba8f62ba490de8933a05b5c6dbf528ace592eba1da16695b3e24170c833da729"
	I1019 13:05:36.335874  434973 cri.go:89] found id: "d7ffff287898431d46f269ae1eba7808cb5fa242b40b83b1d32861a66655d7a8"
	I1019 13:05:36.335877  434973 cri.go:89] found id: "8bdbd9430d1867563c01e9db16c16d8bfc47dfbd4064de68b62e3c608fc7b2e8"
	I1019 13:05:36.335881  434973 cri.go:89] found id: "9036d93a9870e51a8553d29c237178734288ec8578cd01fe4a9d30733a29a989"
	I1019 13:05:36.335885  434973 cri.go:89] found id: "c87f85518ffefcd9ed464c1e8ec3f02cb34777237b1b757d35de45530e13d804"
	I1019 13:05:36.335893  434973 cri.go:89] found id: "9de130db3a61f28e3afc80c22ab1dcda87eb80e3e5cad06bbdf1723cbbc02659"
	I1019 13:05:36.335897  434973 cri.go:89] found id: "fa5349ebdab5aa344012950f607a1526ac8a79065f14d86c23329d96790f97a2"
	I1019 13:05:36.335904  434973 cri.go:89] found id: "5d464678ea1d810867398d806ef9ecea0b7e7e536a9ccd4a7598f0cb18a5d5e8"
	I1019 13:05:36.335908  434973 cri.go:89] found id: "bb49a02b287e654e3bf830c5ec876e1c796bfe354b6a4345250db63f8963a09b"
	I1019 13:05:36.335911  434973 cri.go:89] found id: "0e93a892e96f5ce20eb832477b72857cd295910746fafbd1f048bbf773aaaed1"
	I1019 13:05:36.335914  434973 cri.go:89] found id: "2f49b3722734ec5fa7cb1b7440bec821f2cfc59804041aba24306e9dcc504795"
	I1019 13:05:36.335917  434973 cri.go:89] found id: "d676f6db0dd2dacfd3bf4b36c2ba236c4e1cae0c8626d009575ea36888e03436"
	I1019 13:05:36.335920  434973 cri.go:89] found id: ""
	I1019 13:05:36.335973  434973 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:05:36.349966  434973 retry.go:31] will retry after 320.342098ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:05:36Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:05:36.670446  434973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:05:36.686242  434973 pause.go:52] kubelet running: false
	I1019 13:05:36.686304  434973 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:05:36.880813  434973 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:05:36.880894  434973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:05:36.972197  434973 cri.go:89] found id: "c743f2b2cc5739d6f671d60d15ea27e27dfa0dc935153abf39b1ade383be12c8"
	I1019 13:05:36.972223  434973 cri.go:89] found id: "962513f0b7d745c9f24d3922de11904ff5dac0b2b94327d9b2481cfa5d29c246"
	I1019 13:05:36.972239  434973 cri.go:89] found id: "ba8f62ba490de8933a05b5c6dbf528ace592eba1da16695b3e24170c833da729"
	I1019 13:05:36.972243  434973 cri.go:89] found id: "d7ffff287898431d46f269ae1eba7808cb5fa242b40b83b1d32861a66655d7a8"
	I1019 13:05:36.972246  434973 cri.go:89] found id: "8bdbd9430d1867563c01e9db16c16d8bfc47dfbd4064de68b62e3c608fc7b2e8"
	I1019 13:05:36.972251  434973 cri.go:89] found id: "9036d93a9870e51a8553d29c237178734288ec8578cd01fe4a9d30733a29a989"
	I1019 13:05:36.972254  434973 cri.go:89] found id: "c87f85518ffefcd9ed464c1e8ec3f02cb34777237b1b757d35de45530e13d804"
	I1019 13:05:36.972257  434973 cri.go:89] found id: "9de130db3a61f28e3afc80c22ab1dcda87eb80e3e5cad06bbdf1723cbbc02659"
	I1019 13:05:36.972259  434973 cri.go:89] found id: "fa5349ebdab5aa344012950f607a1526ac8a79065f14d86c23329d96790f97a2"
	I1019 13:05:36.972266  434973 cri.go:89] found id: "5d464678ea1d810867398d806ef9ecea0b7e7e536a9ccd4a7598f0cb18a5d5e8"
	I1019 13:05:36.972274  434973 cri.go:89] found id: "bb49a02b287e654e3bf830c5ec876e1c796bfe354b6a4345250db63f8963a09b"
	I1019 13:05:36.972278  434973 cri.go:89] found id: "0e93a892e96f5ce20eb832477b72857cd295910746fafbd1f048bbf773aaaed1"
	I1019 13:05:36.972281  434973 cri.go:89] found id: "2f49b3722734ec5fa7cb1b7440bec821f2cfc59804041aba24306e9dcc504795"
	I1019 13:05:36.972293  434973 cri.go:89] found id: "d676f6db0dd2dacfd3bf4b36c2ba236c4e1cae0c8626d009575ea36888e03436"
	I1019 13:05:36.972301  434973 cri.go:89] found id: ""
	I1019 13:05:36.972353  434973 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:05:36.990758  434973 out.go:203] 
	W1019 13:05:36.993856  434973 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:05:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:05:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 13:05:36.993876  434973 out.go:285] * 
	* 
	W1019 13:05:37.500664  434973 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 13:05:37.506344  434973 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-052658 --alsologtostderr -v=5" : exit status 80
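
Before surfacing GUEST_PAUSE, the trace above shows two retries with growing delays (192ms, then 320ms) around `sudo runc list -f json`, each failing because /run/runc, runc's default state directory, does not exist. One plausible reading is that the node's container state is simply not where this probe looks, though the log does not pin that down. A minimal sketch of that retry shape in Go (the attempt cap and doubling delay are illustrative, not minikube's actual policy):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRunc mimics the probe in the trace: ask runc for its container list,
	// which fails when the state directory (/run/runc by default) is absent.
	func listRunc() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}

	func main() {
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= 3; attempt++ {
			out, err := listRunc()
			if err == nil {
				fmt.Printf("%s\n", out)
				return
			}
			fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // illustrative backoff; minikube's jittered delays differ
		}
		fmt.Println("exhausted retries, as the pause command does above")
	}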
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-052658
helpers_test.go:243: (dbg) docker inspect pause-052658:

-- stdout --
	[
	    {
	        "Id": "7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087",
	        "Created": "2025-10-19T13:03:41.72178409Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 425534,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:03:42.603837656Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087/hostname",
	        "HostsPath": "/var/lib/docker/containers/7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087/hosts",
	        "LogPath": "/var/lib/docker/containers/7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087/7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087-json.log",
	        "Name": "/pause-052658",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-052658:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-052658",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087",
	                "LowerDir": "/var/lib/docker/overlay2/1c57f188120ea913097609baa89338ca49cf0eeccf67bd6bb88ea1d5f92ca438-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c57f188120ea913097609baa89338ca49cf0eeccf67bd6bb88ea1d5f92ca438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c57f188120ea913097609baa89338ca49cf0eeccf67bd6bb88ea1d5f92ca438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c57f188120ea913097609baa89338ca49cf0eeccf67bd6bb88ea1d5f92ca438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-052658",
	                "Source": "/var/lib/docker/volumes/pause-052658/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-052658",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-052658",
	                "name.minikube.sigs.k8s.io": "pause-052658",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3b261fb9990d2e5b2aef68f49d751f395ed55d00eff9556a51053141754e68d8",
	            "SandboxKey": "/var/run/docker/netns/3b261fb9990d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33343"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33344"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33347"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33345"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33346"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-052658": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:7a:4b:a3:3f:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "940a40da5d48c45cd8a419b4d8ee2424ed671266b562c672c2d6bce42aaa1ea7",
	                    "EndpointID": "c7972a9f3e02509da05d4a79295271b5c7de1519a04f178526f5d2d25765b859",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-052658",
	                        "7495ccad9f7a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
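
The inspect output confirms the wiring the trace relied on: 22/tcp maps to 127.0.0.1:33343, the SSH port dialed earlier, and HostConfig.Tmpfs mounts /run as a tmpfs inside the node, which at least fits /run/runc being absent. A sketch of resolving one mapped host port with the same Go template the trace runs (template copied from the cli_runner line above; hostPortFor is a hypothetical helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPortFor runs the same inspect template the trace uses for 22/tcp
	// to resolve a container port to its published host port.
	func hostPortFor(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		port, err := hostPortFor("pause-052658", "22/tcp")
		fmt.Println(port, err) // 33343 per the inspect output above
	}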
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-052658 -n pause-052658
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-052658 -n pause-052658: exit status 2 (462.018493ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-052658 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-052658 logs -n 25: (1.808795531s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p test-preload-774430                                                                                                                   │ test-preload-774430         │ jenkins │ v1.37.0 │ 19 Oct 25 13:00 UTC │ 19 Oct 25 13:00 UTC │
	│ start   │ -p test-preload-774430 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                        │ test-preload-774430         │ jenkins │ v1.37.0 │ 19 Oct 25 13:00 UTC │ 19 Oct 25 13:01 UTC │
	│ image   │ test-preload-774430 image list                                                                                                           │ test-preload-774430         │ jenkins │ v1.37.0 │ 19 Oct 25 13:01 UTC │ 19 Oct 25 13:01 UTC │
	│ delete  │ -p test-preload-774430                                                                                                                   │ test-preload-774430         │ jenkins │ v1.37.0 │ 19 Oct 25 13:01 UTC │ 19 Oct 25 13:01 UTC │
	│ start   │ -p scheduled-stop-739112 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:01 UTC │ 19 Oct 25 13:02 UTC │
	│ stop    │ -p scheduled-stop-739112 --schedule 5m                                                                                                   │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 5m                                                                                                   │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 5m                                                                                                   │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 15s                                                                                                  │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 15s                                                                                                  │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 15s                                                                                                  │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --cancel-scheduled                                                                                              │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │ 19 Oct 25 13:02 UTC │
	│ stop    │ -p scheduled-stop-739112 --schedule 15s                                                                                                  │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 15s                                                                                                  │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 15s                                                                                                  │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │ 19 Oct 25 13:02 UTC │
	│ delete  │ -p scheduled-stop-739112                                                                                                                 │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:03 UTC │ 19 Oct 25 13:03 UTC │
	│ start   │ -p insufficient-storage-126728 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-126728 │ jenkins │ v1.37.0 │ 19 Oct 25 13:03 UTC │                     │
	│ delete  │ -p insufficient-storage-126728                                                                                                           │ insufficient-storage-126728 │ jenkins │ v1.37.0 │ 19 Oct 25 13:03 UTC │ 19 Oct 25 13:03 UTC │
	│ start   │ -p pause-052658 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-052658                │ jenkins │ v1.37.0 │ 19 Oct 25 13:03 UTC │ 19 Oct 25 13:05 UTC │
	│ start   │ -p missing-upgrade-754625 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-754625      │ jenkins │ v1.32.0 │ 19 Oct 25 13:03 UTC │ 19 Oct 25 13:04 UTC │
	│ start   │ -p missing-upgrade-754625 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-754625      │ jenkins │ v1.37.0 │ 19 Oct 25 13:04 UTC │ 19 Oct 25 13:05 UTC │
	│ start   │ -p pause-052658 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-052658                │ jenkins │ v1.37.0 │ 19 Oct 25 13:05 UTC │ 19 Oct 25 13:05 UTC │
	│ delete  │ -p missing-upgrade-754625                                                                                                                │ missing-upgrade-754625      │ jenkins │ v1.37.0 │ 19 Oct 25 13:05 UTC │ 19 Oct 25 13:05 UTC │
	│ start   │ -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-104724   │ jenkins │ v1.37.0 │ 19 Oct 25 13:05 UTC │                     │
	│ pause   │ -p pause-052658 --alsologtostderr -v=5                                                                                                   │ pause-052658                │ jenkins │ v1.37.0 │ 19 Oct 25 13:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:05:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:05:23.999474  433679 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:05:23.999726  433679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:05:23.999754  433679 out.go:374] Setting ErrFile to fd 2...
	I1019 13:05:23.999773  433679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:05:24.000104  433679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:05:24.000686  433679 out.go:368] Setting JSON to false
	I1019 13:05:24.006414  433679 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10074,"bootTime":1760869050,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:05:24.006557  433679 start.go:141] virtualization:  
	I1019 13:05:24.012319  433679 out.go:179] * [kubernetes-upgrade-104724] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:05:24.015654  433679 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:05:24.015735  433679 notify.go:220] Checking for updates...
	I1019 13:05:24.021779  433679 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:05:24.024768  433679 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:05:24.027842  433679 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:05:24.031149  433679 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:05:24.034302  433679 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:05:24.038634  433679 config.go:182] Loaded profile config "pause-052658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:05:24.038798  433679 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:05:24.089720  433679 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:05:24.089838  433679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:05:24.193800  433679 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 13:05:24.182900924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:05:24.193908  433679 docker.go:318] overlay module found
	I1019 13:05:24.196965  433679 out.go:179] * Using the docker driver based on user configuration
	I1019 13:05:24.200009  433679 start.go:305] selected driver: docker
	I1019 13:05:24.200032  433679 start.go:925] validating driver "docker" against <nil>
	I1019 13:05:24.200045  433679 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:05:24.200789  433679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:05:24.303954  433679 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 13:05:24.293934893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:05:24.304097  433679 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 13:05:24.304317  433679 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 13:05:24.307020  433679 out.go:179] * Using Docker driver with root privileges
	I1019 13:05:24.309733  433679 cni.go:84] Creating CNI manager for ""
	I1019 13:05:24.309805  433679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:05:24.309814  433679 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 13:05:24.309891  433679 start.go:349] cluster config:
	{Name:kubernetes-upgrade-104724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-104724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:05:24.312966  433679 out.go:179] * Starting "kubernetes-upgrade-104724" primary control-plane node in "kubernetes-upgrade-104724" cluster
	I1019 13:05:24.315869  433679 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:05:24.318908  433679 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:05:24.321763  433679 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 13:05:24.321817  433679 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1019 13:05:24.321826  433679 cache.go:58] Caching tarball of preloaded images
	I1019 13:05:24.321921  433679 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:05:24.321931  433679 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1019 13:05:24.322064  433679 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/config.json ...
	I1019 13:05:24.322082  433679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/config.json: {Name:mk371363c3bcd2d0294e4b2364a286d16f3cccb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:05:24.322206  433679 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:05:24.353978  433679 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:05:24.353999  433679 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:05:24.354019  433679 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:05:24.354042  433679 start.go:360] acquireMachinesLock for kubernetes-upgrade-104724: {Name:mk582c3f8c76afc27224c5c18a2f06c352280fba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:05:24.354796  433679 start.go:364] duration metric: took 734.993µs to acquireMachinesLock for "kubernetes-upgrade-104724"
	I1019 13:05:24.354835  433679 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-104724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-104724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:05:24.354900  433679 start.go:125] createHost starting for "" (driver="docker")
	I1019 13:05:23.799220  431885 node_ready.go:49] node "pause-052658" is "Ready"
	I1019 13:05:23.799247  431885 node_ready.go:38] duration metric: took 4.674232289s for node "pause-052658" to be "Ready" ...
	I1019 13:05:23.799262  431885 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:05:23.799319  431885 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:05:23.825225  431885 api_server.go:72] duration metric: took 5.06067273s to wait for apiserver process to appear ...
	I1019 13:05:23.825256  431885 api_server.go:88] waiting for apiserver healthz status ...
	I1019 13:05:23.825286  431885 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 13:05:23.890974  431885 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 13:05:23.891006  431885 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 13:05:24.325410  431885 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 13:05:24.337200  431885 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 13:05:24.337231  431885 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 13:05:24.825763  431885 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 13:05:24.841528  431885 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 13:05:24.843959  431885 api_server.go:141] control plane version: v1.34.1
	I1019 13:05:24.844034  431885 api_server.go:131] duration metric: took 1.018769315s to wait for apiserver health ...
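The two 500 responses above are the normal boot sequence: minikube keeps polling /healthz, tolerating errors while post-start hooks such as rbac/bootstrap-roles finish, until the endpoint returns 200. A minimal Go sketch of such a polling loop; the InsecureSkipVerify transport is a simplification for this sketch, since minikube itself authenticates using the cluster CA and client certificates:

```go
// Poll the apiserver /healthz endpoint until it returns 200 or a deadline
// passes, treating 500s as "still booting" (a sketch of the loop logged above).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is up
			}
			// A 500 with "[-]poststarthook/... failed" lines means hooks are still running.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```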
	I1019 13:05:24.844056  431885 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 13:05:24.850773  431885 system_pods.go:59] 7 kube-system pods found
	I1019 13:05:24.850810  431885 system_pods.go:61] "coredns-66bc5c9577-9fkgs" [bc66b89c-607e-43bb-bf8d-cd5963f3e7df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:05:24.850819  431885 system_pods.go:61] "etcd-pause-052658" [869c76be-f363-43df-a65f-495af9c817d4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:05:24.850824  431885 system_pods.go:61] "kindnet-58smf" [a0499250-d06a-41f4-9f84-bc7972eb976b] Running
	I1019 13:05:24.850831  431885 system_pods.go:61] "kube-apiserver-pause-052658" [14d2e5a9-d322-4636-8cf0-ab7c8d49f95a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:05:24.850844  431885 system_pods.go:61] "kube-controller-manager-pause-052658" [13e451b2-eb9f-411e-a150-4d58198964f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:05:24.850854  431885 system_pods.go:61] "kube-proxy-8xzhr" [02e96e3a-3380-49f9-b471-ec534e19fe43] Running
	I1019 13:05:24.850861  431885 system_pods.go:61] "kube-scheduler-pause-052658" [4eee30ae-e88b-48a7-8e79-71cfa6b2ec5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:05:24.850867  431885 system_pods.go:74] duration metric: took 6.791261ms to wait for pod list to return data ...
	I1019 13:05:24.850876  431885 default_sa.go:34] waiting for default service account to be created ...
	I1019 13:05:24.856545  431885 default_sa.go:45] found service account: "default"
	I1019 13:05:24.856570  431885 default_sa.go:55] duration metric: took 5.688271ms for default service account to be created ...
	I1019 13:05:24.856579  431885 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 13:05:24.862759  431885 system_pods.go:86] 7 kube-system pods found
	I1019 13:05:24.862856  431885 system_pods.go:89] "coredns-66bc5c9577-9fkgs" [bc66b89c-607e-43bb-bf8d-cd5963f3e7df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:05:24.862881  431885 system_pods.go:89] "etcd-pause-052658" [869c76be-f363-43df-a65f-495af9c817d4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:05:24.862927  431885 system_pods.go:89] "kindnet-58smf" [a0499250-d06a-41f4-9f84-bc7972eb976b] Running
	I1019 13:05:24.862955  431885 system_pods.go:89] "kube-apiserver-pause-052658" [14d2e5a9-d322-4636-8cf0-ab7c8d49f95a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:05:24.862987  431885 system_pods.go:89] "kube-controller-manager-pause-052658" [13e451b2-eb9f-411e-a150-4d58198964f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:05:24.863023  431885 system_pods.go:89] "kube-proxy-8xzhr" [02e96e3a-3380-49f9-b471-ec534e19fe43] Running
	I1019 13:05:24.863045  431885 system_pods.go:89] "kube-scheduler-pause-052658" [4eee30ae-e88b-48a7-8e79-71cfa6b2ec5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:05:24.863067  431885 system_pods.go:126] duration metric: took 6.481711ms to wait for k8s-apps to be running ...
	I1019 13:05:24.863108  431885 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 13:05:24.863195  431885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:05:24.880981  431885 system_svc.go:56] duration metric: took 17.864045ms WaitForService to wait for kubelet
	I1019 13:05:24.881071  431885 kubeadm.go:586] duration metric: took 6.116523713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:05:24.881106  431885 node_conditions.go:102] verifying NodePressure condition ...
	I1019 13:05:24.886979  431885 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 13:05:24.887061  431885 node_conditions.go:123] node cpu capacity is 2
	I1019 13:05:24.887087  431885 node_conditions.go:105] duration metric: took 5.946236ms to run NodePressure ...
	I1019 13:05:24.887112  431885 start.go:241] waiting for startup goroutines ...
	I1019 13:05:24.887148  431885 start.go:246] waiting for cluster config update ...
	I1019 13:05:24.887175  431885 start.go:255] writing updated cluster config ...
	I1019 13:05:24.887543  431885 ssh_runner.go:195] Run: rm -f paused
	I1019 13:05:24.899776  431885 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:05:24.900365  431885 kapi.go:59] client config for pause-052658: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-292654/.minikube/profiles/pause-052658/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-292654/.minikube/profiles/pause-052658/client.key", CAFile:"/home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21201f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 13:05:24.903320  431885 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9fkgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:24.358246  433679 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 13:05:24.358489  433679 start.go:159] libmachine.API.Create for "kubernetes-upgrade-104724" (driver="docker")
	I1019 13:05:24.358534  433679 client.go:168] LocalClient.Create starting
	I1019 13:05:24.358617  433679 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem
	I1019 13:05:24.358652  433679 main.go:141] libmachine: Decoding PEM data...
	I1019 13:05:24.358665  433679 main.go:141] libmachine: Parsing certificate...
	I1019 13:05:24.358717  433679 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem
	I1019 13:05:24.358734  433679 main.go:141] libmachine: Decoding PEM data...
	I1019 13:05:24.358750  433679 main.go:141] libmachine: Parsing certificate...
	I1019 13:05:24.359123  433679 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-104724 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 13:05:24.377468  433679 cli_runner.go:211] docker network inspect kubernetes-upgrade-104724 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 13:05:24.377555  433679 network_create.go:284] running [docker network inspect kubernetes-upgrade-104724] to gather additional debugging logs...
	I1019 13:05:24.377572  433679 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-104724
	W1019 13:05:24.398680  433679 cli_runner.go:211] docker network inspect kubernetes-upgrade-104724 returned with exit code 1
	I1019 13:05:24.398718  433679 network_create.go:287] error running [docker network inspect kubernetes-upgrade-104724]: docker network inspect kubernetes-upgrade-104724: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-104724 not found
	I1019 13:05:24.398731  433679 network_create.go:289] output of [docker network inspect kubernetes-upgrade-104724]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-104724 not found
	
	** /stderr **
	I1019 13:05:24.398837  433679 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:05:24.419702  433679 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-319c97358c5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2a:99:c3:44:12:51} reservation:<nil>}
	I1019 13:05:24.420004  433679 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5c09b33e0936 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:93:4b:f6:fd:1c} reservation:<nil>}
	I1019 13:05:24.420345  433679 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c2bbaadd4a8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:8f:96:27:48:2c} reservation:<nil>}
	I1019 13:05:24.420618  433679 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-940a40da5d48 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:12:99:02:79:b8:9e} reservation:<nil>}
	I1019 13:05:24.421009  433679 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d9ff0}
	I1019 13:05:24.421026  433679 network_create.go:124] attempt to create docker network kubernetes-upgrade-104724 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 13:05:24.421084  433679 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-104724 kubernetes-upgrade-104724
	I1019 13:05:24.492304  433679 network_create.go:108] docker network kubernetes-upgrade-104724 192.168.85.0/24 created
	I1019 13:05:24.492332  433679 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-104724" container
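The subnet scan above walks minikube's candidate private /24s in steps of 9 (192.168.49.0, .58, .67, .76, ...) and settles on the first one no existing docker bridge occupies. A minimal Go sketch of that scan; the taken list is hard-coded here, whereas minikube derives it from `docker network inspect`:

```go
// Pick the first free 192.168.x.0/24 from minikube's candidate sequence
// (a sketch of the network.go scan logged above).
package main

import "fmt"

func firstFreeSubnet(taken map[string]bool) string {
	// Candidates step by 9 in the third octet: .49, .58, .67, .76, .85, ...
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return "" // no free candidate
}

func main() {
	// In the log, four bridges already exist, so .85 is the first free one.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	fmt.Println("using free private subnet:", firstFreeSubnet(taken)) // 192.168.85.0/24
}
```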
	I1019 13:05:24.492421  433679 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 13:05:24.508979  433679 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-104724 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-104724 --label created_by.minikube.sigs.k8s.io=true
	I1019 13:05:24.529808  433679 oci.go:103] Successfully created a docker volume kubernetes-upgrade-104724
	I1019 13:05:24.529907  433679 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-104724-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-104724 --entrypoint /usr/bin/test -v kubernetes-upgrade-104724:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 13:05:25.138321  433679 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-104724
	I1019 13:05:25.138371  433679 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 13:05:25.138393  433679 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 13:05:25.138460  433679 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-104724:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 13:05:26.910504  431885 pod_ready.go:104] pod "coredns-66bc5c9577-9fkgs" is not "Ready", error: <nil>
	W1019 13:05:29.409410  431885 pod_ready.go:104] pod "coredns-66bc5c9577-9fkgs" is not "Ready", error: <nil>
	I1019 13:05:29.909559  431885 pod_ready.go:94] pod "coredns-66bc5c9577-9fkgs" is "Ready"
	I1019 13:05:29.909591  431885 pod_ready.go:86] duration metric: took 5.006249898s for pod "coredns-66bc5c9577-9fkgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:29.912419  431885 pod_ready.go:83] waiting for pod "etcd-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:30.105989  433679 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-104724:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.967458731s)
	I1019 13:05:30.106032  433679 kic.go:203] duration metric: took 4.967629359s to extract preloaded images to volume ...
	W1019 13:05:30.106194  433679 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 13:05:30.106313  433679 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 13:05:30.183828  433679 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-104724 --name kubernetes-upgrade-104724 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-104724 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-104724 --network kubernetes-upgrade-104724 --ip 192.168.85.2 --volume kubernetes-upgrade-104724:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 13:05:30.498016  433679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104724 --format={{.State.Running}}
	I1019 13:05:30.520881  433679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104724 --format={{.State.Status}}
	I1019 13:05:30.546770  433679 cli_runner.go:164] Run: docker exec kubernetes-upgrade-104724 stat /var/lib/dpkg/alternatives/iptables
	I1019 13:05:30.606425  433679 oci.go:144] the created container "kubernetes-upgrade-104724" has a running status.
	I1019 13:05:30.606460  433679 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa...
	I1019 13:05:31.327475  433679 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 13:05:31.346693  433679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104724 --format={{.State.Status}}
	I1019 13:05:31.362800  433679 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 13:05:31.362823  433679 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-104724 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 13:05:31.404734  433679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104724 --format={{.State.Status}}
	I1019 13:05:31.425166  433679 machine.go:93] provisionDockerMachine start ...
	I1019 13:05:31.425270  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:31.443108  433679 main.go:141] libmachine: Using SSH client type: native
	I1019 13:05:31.443439  433679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33358 <nil> <nil>}
	I1019 13:05:31.443455  433679 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:05:31.444072  433679 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1019 13:05:31.917534  431885 pod_ready.go:104] pod "etcd-pause-052658" is not "Ready", error: <nil>
	I1019 13:05:32.417770  431885 pod_ready.go:94] pod "etcd-pause-052658" is "Ready"
	I1019 13:05:32.417797  431885 pod_ready.go:86] duration metric: took 2.505353857s for pod "etcd-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.420176  431885 pod_ready.go:83] waiting for pod "kube-apiserver-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.424861  431885 pod_ready.go:94] pod "kube-apiserver-pause-052658" is "Ready"
	I1019 13:05:32.424893  431885 pod_ready.go:86] duration metric: took 4.684293ms for pod "kube-apiserver-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.427387  431885 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.432166  431885 pod_ready.go:94] pod "kube-controller-manager-pause-052658" is "Ready"
	I1019 13:05:32.432193  431885 pod_ready.go:86] duration metric: took 4.777636ms for pod "kube-controller-manager-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.434396  431885 pod_ready.go:83] waiting for pod "kube-proxy-8xzhr" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.708221  431885 pod_ready.go:94] pod "kube-proxy-8xzhr" is "Ready"
	I1019 13:05:32.708297  431885 pod_ready.go:86] duration metric: took 273.876153ms for pod "kube-proxy-8xzhr" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.907427  431885 pod_ready.go:83] waiting for pod "kube-scheduler-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:34.914105  431885 pod_ready.go:94] pod "kube-scheduler-pause-052658" is "Ready"
	I1019 13:05:34.914136  431885 pod_ready.go:86] duration metric: took 2.006678618s for pod "kube-scheduler-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:34.914148  431885 pod_ready.go:40] duration metric: took 10.014311746s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:05:34.974531  431885 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 13:05:34.978438  431885 out.go:179] * Done! kubectl is now configured to use "pause-052658" cluster and "default" namespace by default
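The pod_ready waits above poll each control-plane label until the pod's PodReady condition turns True (or the pod is gone). A minimal client-go sketch of the same check; the kubeconfig location is an assumption:

```go
// List kube-system pods behind each control-plane label and report whether
// their PodReady condition is True (a sketch of the pod_ready.go wait above).
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// The same label selectors listed in the log above.
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
		}
	}
}
```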
	I1019 13:05:34.609622  433679 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-104724
	
	I1019 13:05:34.609650  433679 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-104724"
	I1019 13:05:34.609738  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:34.629106  433679 main.go:141] libmachine: Using SSH client type: native
	I1019 13:05:34.629483  433679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33358 <nil> <nil>}
	I1019 13:05:34.629500  433679 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-104724 && echo "kubernetes-upgrade-104724" | sudo tee /etc/hostname
	I1019 13:05:34.787448  433679 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-104724
	
	I1019 13:05:34.787529  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:34.805370  433679 main.go:141] libmachine: Using SSH client type: native
	I1019 13:05:34.805710  433679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33358 <nil> <nil>}
	I1019 13:05:34.805752  433679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-104724' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-104724/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-104724' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:05:34.959643  433679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 13:05:34.959678  433679 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:05:34.959698  433679 ubuntu.go:190] setting up certificates
	I1019 13:05:34.959707  433679 provision.go:84] configureAuth start
	I1019 13:05:34.959767  433679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104724
	I1019 13:05:34.980487  433679 provision.go:143] copyHostCerts
	I1019 13:05:34.980574  433679 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:05:34.980587  433679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:05:34.980660  433679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:05:34.980756  433679 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:05:34.980761  433679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:05:34.980791  433679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:05:34.980875  433679 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:05:34.980879  433679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:05:34.980904  433679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:05:34.980957  433679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-104724 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-104724 localhost minikube]
	I1019 13:05:35.470472  433679 provision.go:177] copyRemoteCerts
	I1019 13:05:35.470593  433679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:05:35.470664  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:35.499866  433679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa Username:docker}
	I1019 13:05:35.623309  433679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:05:35.646661  433679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1019 13:05:35.666519  433679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 13:05:35.685418  433679 provision.go:87] duration metric: took 725.686581ms to configureAuth
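configureAuth above generates a server certificate whose SANs cover the container IP and the machine's hostnames. A minimal Go sketch of that step, self-signed here for brevity where minikube signs with its CA key:

```go
// Issue a server certificate with the SANs from the log line above
// (127.0.0.1, 192.168.85.2, and the machine hostnames). Self-signed
// in this sketch; minikube signs with ca.pem/ca-key.pem instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-104724"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"kubernetes-upgrade-104724", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```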
	I1019 13:05:35.685445  433679 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:05:35.685630  433679 config.go:182] Loaded profile config "kubernetes-upgrade-104724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 13:05:35.685836  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:35.708630  433679 main.go:141] libmachine: Using SSH client type: native
	I1019 13:05:35.708933  433679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33358 <nil> <nil>}
	I1019 13:05:35.708948  433679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:05:36.063946  433679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:05:36.063972  433679 machine.go:96] duration metric: took 4.6387842s to provisionDockerMachine
	I1019 13:05:36.063983  433679 client.go:171] duration metric: took 11.7054422s to LocalClient.Create
	I1019 13:05:36.063995  433679 start.go:167] duration metric: took 11.7055074s to libmachine.API.Create "kubernetes-upgrade-104724"
	I1019 13:05:36.064002  433679 start.go:293] postStartSetup for "kubernetes-upgrade-104724" (driver="docker")
	I1019 13:05:36.064012  433679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:05:36.064091  433679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:05:36.064138  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:36.095211  433679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa Username:docker}
	I1019 13:05:36.207589  433679 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:05:36.215040  433679 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:05:36.215081  433679 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:05:36.215095  433679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:05:36.215154  433679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:05:36.215239  433679 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:05:36.215338  433679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:05:36.225612  433679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:05:36.246612  433679 start.go:296] duration metric: took 182.596169ms for postStartSetup
	I1019 13:05:36.247061  433679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104724
	I1019 13:05:36.267904  433679 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/config.json ...
	I1019 13:05:36.268188  433679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:05:36.268230  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:36.289770  433679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa Username:docker}
	I1019 13:05:36.394948  433679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:05:36.400098  433679 start.go:128] duration metric: took 12.045163556s to createHost
	I1019 13:05:36.400174  433679 start.go:83] releasing machines lock for "kubernetes-upgrade-104724", held for 12.045360613s
	I1019 13:05:36.400269  433679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104724
	I1019 13:05:36.418511  433679 ssh_runner.go:195] Run: cat /version.json
	I1019 13:05:36.418568  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:36.418626  433679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:05:36.418694  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:36.437187  433679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa Username:docker}
	I1019 13:05:36.443247  433679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa Username:docker}
	I1019 13:05:36.546387  433679 ssh_runner.go:195] Run: systemctl --version
	I1019 13:05:36.640102  433679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:05:36.679251  433679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:05:36.686450  433679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:05:36.686520  433679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:05:36.725459  433679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
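The find/mv above disables any stock bridge or podman CNI configs so they cannot shadow the cluster CNI. A minimal Go sketch of the same rename pass (run with enough privilege to modify /etc/cni/net.d):

```go
// Rename bridge/podman CNI configs under /etc/cni/net.d with a .mk_disabled
// suffix so the runtime ignores them (a sketch of the find/mv logged above).
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("skip:", err)
				continue
			}
			fmt.Println("disabled:", m)
		}
	}
}
```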
	I1019 13:05:36.725483  433679 start.go:495] detecting cgroup driver to use...
	I1019 13:05:36.725515  433679 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:05:36.725565  433679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:05:36.750118  433679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:05:36.767955  433679 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:05:36.768015  433679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:05:36.793575  433679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:05:36.812085  433679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:05:36.974911  433679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:05:37.113802  433679 docker.go:234] disabling docker service ...
	I1019 13:05:37.113921  433679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:05:37.139545  433679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:05:37.155482  433679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:05:37.280567  433679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:05:37.397969  433679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:05:37.411655  433679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:05:37.426136  433679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1019 13:05:37.426238  433679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.435416  433679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:05:37.435528  433679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.444664  433679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.453998  433679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.462875  433679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:05:37.471494  433679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.480993  433679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.495041  433679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.507694  433679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:05:37.528945  433679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:05:37.539960  433679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:05:37.694150  433679 ssh_runner.go:195] Run: sudo systemctl restart crio
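The sed edits above point cri-o at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager before the daemon restart. A minimal Go sketch doing the same two rewrites in-process rather than via sed; run it on the node (or over SSH, as minikube does) before restarting crio:

```go
// Rewrite the pause_image and cgroup_manager keys in the cri-o drop-in
// config (equivalent to the two sed commands logged above).
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}
```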
	I1019 13:05:37.852113  433679 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:05:37.852186  433679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:05:37.858285  433679 start.go:563] Will wait 60s for crictl version
	I1019 13:05:37.858351  433679 ssh_runner.go:195] Run: which crictl
	I1019 13:05:37.863890  433679 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:05:37.899036  433679 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 13:05:37.899116  433679 ssh_runner.go:195] Run: crio --version
	I1019 13:05:37.936981  433679 ssh_runner.go:195] Run: crio --version
	I1019 13:05:37.984183  433679 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
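Both waits above are bounded at 60s: first for the crio socket to appear, then for crictl to answer. A minimal Go sketch of that two-stage wait, using the socket and crictl paths from the log:

```go
// Wait up to 60s for the crio socket, then up to 60s for `crictl version`
// to succeed (a sketch of the two waits logged above).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitFor(timeout time.Duration, ready func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ready() {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s", timeout)
}

func main() {
	sock := "/var/run/crio/crio.sock"
	if err := waitFor(60*time.Second, func() bool {
		_, err := os.Stat(sock)
		return err == nil
	}); err != nil {
		panic(err)
	}
	if err := waitFor(60*time.Second, func() bool {
		return exec.Command("sudo", "/usr/local/bin/crictl", "version").Run() == nil
	}); err != nil {
		panic(err)
	}
	fmt.Println("crio is up at", sock)
}
```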
	
	
	==> CRI-O <==
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.718986319Z" level=info msg="Started container" PID=2281 containerID=8bdbd9430d1867563c01e9db16c16d8bfc47dfbd4064de68b62e3c608fc7b2e8 description=kube-system/kube-apiserver-pause-052658/kube-apiserver id=69ad1df2-e73b-453c-903c-d32f7f040258 name=/runtime.v1.RuntimeService/StartContainer sandboxID=28c7e1a3bbecbce188e478e8fe8d4018dc00bf478a6c2b05dd578c3c0c4af827
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.740180078Z" level=info msg="Created container d7ffff287898431d46f269ae1eba7808cb5fa242b40b83b1d32861a66655d7a8: kube-system/etcd-pause-052658/etcd" id=60330a5d-1de0-4fe7-991e-0b55b179cff9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.741920296Z" level=info msg="Created container ba8f62ba490de8933a05b5c6dbf528ace592eba1da16695b3e24170c833da729: kube-system/kube-controller-manager-pause-052658/kube-controller-manager" id=4a31e9d2-25f1-4d10-bc87-8f884232c13b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.744298381Z" level=info msg="Starting container: d7ffff287898431d46f269ae1eba7808cb5fa242b40b83b1d32861a66655d7a8" id=b7b2ffa0-84b7-4f36-93d9-2c559dfaac52 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.745167916Z" level=info msg="Starting container: ba8f62ba490de8933a05b5c6dbf528ace592eba1da16695b3e24170c833da729" id=f462a771-f90d-47ac-a4a5-4df5458976c5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.776413915Z" level=info msg="Started container" PID=2293 containerID=ba8f62ba490de8933a05b5c6dbf528ace592eba1da16695b3e24170c833da729 description=kube-system/kube-controller-manager-pause-052658/kube-controller-manager id=f462a771-f90d-47ac-a4a5-4df5458976c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=488f88c5437af9ab0686d7a9841e0b312caf8ec2a951f5ecf7c0f9171afe9fc7
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.785965528Z" level=info msg="Started container" PID=2292 containerID=d7ffff287898431d46f269ae1eba7808cb5fa242b40b83b1d32861a66655d7a8 description=kube-system/etcd-pause-052658/etcd id=b7b2ffa0-84b7-4f36-93d9-2c559dfaac52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a433cdae0dd2f23a31c02b2a415bd577847dc4f8c5f3bec80d9712ab5043f381
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.840400745Z" level=info msg="Created container c743f2b2cc5739d6f671d60d15ea27e27dfa0dc935153abf39b1ade383be12c8: kube-system/kube-scheduler-pause-052658/kube-scheduler" id=b682d546-4ddd-4016-9dfa-7f15a05de5b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.841703779Z" level=info msg="Starting container: c743f2b2cc5739d6f671d60d15ea27e27dfa0dc935153abf39b1ade383be12c8" id=6977f4a6-3e47-4bd4-a737-7cff21959d65 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.864324611Z" level=info msg="Started container" PID=2324 containerID=c743f2b2cc5739d6f671d60d15ea27e27dfa0dc935153abf39b1ade383be12c8 description=kube-system/kube-scheduler-pause-052658/kube-scheduler id=6977f4a6-3e47-4bd4-a737-7cff21959d65 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48128dfc49e4b46422d0cb07dc95debfa03146d43a568b84ea2d5712ea759521
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.890065439Z" level=info msg="Created container 962513f0b7d745c9f24d3922de11904ff5dac0b2b94327d9b2481cfa5d29c246: kube-system/kube-proxy-8xzhr/kube-proxy" id=36454d8e-3a32-4c55-b38a-5197a9eac936 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.897045241Z" level=info msg="Starting container: 962513f0b7d745c9f24d3922de11904ff5dac0b2b94327d9b2481cfa5d29c246" id=4686da78-b137-404e-838b-633b94f9886a name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.90513312Z" level=info msg="Started container" PID=2321 containerID=962513f0b7d745c9f24d3922de11904ff5dac0b2b94327d9b2481cfa5d29c246 description=kube-system/kube-proxy-8xzhr/kube-proxy id=4686da78-b137-404e-838b-633b94f9886a name=/runtime.v1.RuntimeService/StartContainer sandboxID=3de2ee91a006f3aa634654e7652905e329163b0f70b9759b0a5ffcb36a5ecbca
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.055418138Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.063632657Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.063668333Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.063690512Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.075264932Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.07530467Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.075329737Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.0800277Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.080084095Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.080107201Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.084949938Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.084986755Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	c743f2b2cc573       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago       Running             kube-scheduler            1                   48128dfc49e4b       kube-scheduler-pause-052658            kube-system
	962513f0b7d74       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   21 seconds ago       Running             kube-proxy                1                   3de2ee91a006f       kube-proxy-8xzhr                       kube-system
	ba8f62ba490de       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago       Running             kube-controller-manager   1                   488f88c5437af       kube-controller-manager-pause-052658   kube-system
	d7ffff2878984       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago       Running             etcd                      1                   a433cdae0dd2f       etcd-pause-052658                      kube-system
	8bdbd9430d186       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago       Running             kube-apiserver            1                   28c7e1a3bbecb       kube-apiserver-pause-052658            kube-system
	9036d93a9870e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   f17b27f4129d6       kindnet-58smf                          kube-system
	c87f85518ffef       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   ec416a9934c79       coredns-66bc5c9577-9fkgs               kube-system
	9de130db3a61f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   ec416a9934c79       coredns-66bc5c9577-9fkgs               kube-system
	fa5349ebdab5a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   3de2ee91a006f       kube-proxy-8xzhr                       kube-system
	5d464678ea1d8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   f17b27f4129d6       kindnet-58smf                          kube-system
	bb49a02b287e6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   48128dfc49e4b       kube-scheduler-pause-052658            kube-system
	0e93a892e96f5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   28c7e1a3bbecb       kube-apiserver-pause-052658            kube-system
	2f49b3722734e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   488f88c5437af       kube-controller-manager-pause-052658   kube-system
	d676f6db0dd2d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   a433cdae0dd2f       etcd-pause-052658                      kube-system
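
The pause/restart cycle is visible in this table: every pod keeps its POD ID while the restarted container gets a new container ID and ATTEMPT 1. A hedged sketch of collecting the same listing over crio's CRI socket, assuming k8s.io/cri-api and google.golang.org/grpc are available (the report itself gathers this through the test harness, not this code):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Same socket the kubelet's ContainerGCFailed event above failed
		// to dial while crio was restarting.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// %.13s truncates IDs the same way the table above does.
			fmt.Printf("%.13s  %-24s %-20v attempt=%d pod=%.13s\n",
				c.Id, c.Metadata.Name, c.State, c.Metadata.Attempt, c.PodSandboxId)
		}
	}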
	
	
	==> coredns [9de130db3a61f28e3afc80c22ab1dcda87eb80e3e5cad06bbdf1723cbbc02659] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34468 - 64387 "HINFO IN 5659924498181447867.7303966064205267563. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.085421142s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c87f85518ffefcd9ed464c1e8ec3f02cb34777237b1b757d35de45530e13d804] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38639 - 44256 "HINFO IN 3878672707910489575.893701343700373851. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.040468434s
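
The long random HINFO query each instance logs right after startup is CoreDNS's own self-check; the NXDOMAIN answer is the expected success signal, not a failure. A hypothetical reproduction of such a probe with github.com/miekg/dns (the DNS library CoreDNS builds on); the query name and address here are made up:

	package main
	
	import (
		"fmt"
		"time"
	
		"github.com/miekg/dns"
	)
	
	func main() {
		m := new(dns.Msg)
		// A nonexistent random label, like 3878672707910489575... above;
		// an NXDOMAIN reply means the server is up and answering.
		m.SetQuestion("1234567890.selfcheck.invalid.", dns.TypeHINFO)
	
		c := &dns.Client{Timeout: 2 * time.Second}
		r, rtt, err := c.Exchange(m, "127.0.0.1:53")
		if err != nil {
			panic(err)
		}
		fmt.Printf("rcode=%s rtt=%s\n", dns.RcodeToString[r.Rcode], rtt)
	}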
	
	
	==> describe nodes <==
	Name:               pause-052658
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-052658
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=pause-052658
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_04_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:04:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-052658
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:05:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:05:02 +0000   Sun, 19 Oct 2025 13:04:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:05:02 +0000   Sun, 19 Oct 2025 13:04:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:05:02 +0000   Sun, 19 Oct 2025 13:04:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:05:02 +0000   Sun, 19 Oct 2025 13:05:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-052658
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                0bbe64b9-a531-48c8-b22c-19bed7ed16a9
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-9fkgs                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-052658                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kindnet-58smf                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      80s
	  kube-system                 kube-apiserver-pause-052658             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-pause-052658    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-8xzhr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-052658             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 76s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Warning  CgroupV1                 95s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  95s (x8 over 95s)  kubelet          Node pause-052658 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    95s (x8 over 95s)  kubelet          Node pause-052658 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     95s (x8 over 95s)  kubelet          Node pause-052658 status is now: NodeHasSufficientPID
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  84s                kubelet          Node pause-052658 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    84s                kubelet          Node pause-052658 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     84s                kubelet          Node pause-052658 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                node-controller  Node pause-052658 event: Registered Node pause-052658 in Controller
	  Normal   NodeReady                37s                kubelet          Node pause-052658 status is now: NodeReady
	  Warning  ContainerGCFailed        24s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           13s                node-controller  Node pause-052658 event: Registered Node pause-052658 in Controller
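
The Allocated resources block above is just the per-container requests/limits of the seven non-terminated pods summed (100m+100m+100m+250m+200m+100m = 850m CPU requests). A sketch of recomputing that sum with client-go, assuming a reachable local kubeconfig; init containers and pod overhead are ignored, so this approximates rather than replicates what kubectl describe prints:

	package main
	
	import (
		"context"
		"fmt"
	
		"k8s.io/apimachinery/pkg/api/resource"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		// Non-terminated pods on this node, the same filter describe uses.
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "spec.nodeName=pause-052658,status.phase!=Failed,status.phase!=Succeeded",
		})
		if err != nil {
			panic(err)
		}
	
		cpu, mem := resource.Quantity{}, resource.Quantity{}
		for _, p := range pods.Items {
			for _, c := range p.Spec.Containers {
				cpu.Add(*c.Resources.Requests.Cpu())
				mem.Add(*c.Resources.Requests.Memory())
			}
		}
		fmt.Printf("cpu requests=%s, memory requests=%s across %d pods\n",
			cpu.String(), mem.String(), len(pods.Items))
	}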
	
	
	==> dmesg <==
	[Oct19 12:39] overlayfs: idmapped layers are currently not supported
	[Oct19 12:40] overlayfs: idmapped layers are currently not supported
	[  +3.779280] overlayfs: idmapped layers are currently not supported
	[Oct19 12:41] overlayfs: idmapped layers are currently not supported
	[Oct19 12:42] overlayfs: idmapped layers are currently not supported
	[Oct19 12:43] overlayfs: idmapped layers are currently not supported
	[  +3.355153] overlayfs: idmapped layers are currently not supported
	[Oct19 12:44] overlayfs: idmapped layers are currently not supported
	[ +21.526979] overlayfs: idmapped layers are currently not supported
	[Oct19 12:46] overlayfs: idmapped layers are currently not supported
	[Oct19 12:50] overlayfs: idmapped layers are currently not supported
	[Oct19 12:51] overlayfs: idmapped layers are currently not supported
	[Oct19 12:52] overlayfs: idmapped layers are currently not supported
	[Oct19 12:53] overlayfs: idmapped layers are currently not supported
	[Oct19 12:54] overlayfs: idmapped layers are currently not supported
	[Oct19 12:56] overlayfs: idmapped layers are currently not supported
	[ +16.315179] overlayfs: idmapped layers are currently not supported
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d676f6db0dd2dacfd3bf4b36c2ba236c4e1cae0c8626d009575ea36888e03436] <==
	{"level":"warn","ts":"2025-10-19T13:04:09.254612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:04:09.294886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:04:09.367455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:04:09.418032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:04:09.470261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:04:09.505761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:04:09.732780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48986","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T13:05:08.699382Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T13:05:08.699435Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-052658","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-19T13:05:08.699543Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T13:05:08.699600Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"info","ts":"2025-10-19T13:05:08.867141Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-19T13:05:08.867271Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-19T13:05:08.867291Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-19T13:05:08.867077Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-19T13:05:08.867602Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T13:05:08.867635Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T13:05:08.867643Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-19T13:05:08.867688Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T13:05:08.867702Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T13:05:08.867709Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T13:05:08.870606Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-19T13:05:08.870680Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T13:05:08.870717Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-19T13:05:08.870830Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-052658","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [d7ffff287898431d46f269ae1eba7808cb5fa242b40b83b1d32861a66655d7a8] <==
	{"level":"warn","ts":"2025-10-19T13:05:21.480150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.509433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.587475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.632975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.670003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.727922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.751842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.774317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.786574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.847748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.918627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.954037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.032721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.035244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.054557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.093777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.131706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.170510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.184904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.198730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.282343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.302328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.352050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.360189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.535796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36662","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:05:39 up  2:48,  0 user,  load average: 4.12, 2.69, 2.38
	Linux pause-052658 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5d464678ea1d810867398d806ef9ecea0b7e7e536a9ccd4a7598f0cb18a5d5e8] <==
	I1019 13:04:21.914054       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:04:21.914948       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 13:04:21.915123       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:04:21.915235       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:04:21.915279       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:04:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:04:22.134519       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:04:22.134547       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:04:22.134555       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:04:22.135470       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 13:04:52.135277       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 13:04:52.135386       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 13:04:52.135459       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 13:04:52.135595       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1019 13:04:53.335171       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:04:53.335287       1 metrics.go:72] Registering metrics
	I1019 13:04:53.335368       1 controller.go:711] "Syncing nftables rules"
	I1019 13:05:02.134850       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:05:02.134906       1 main.go:301] handling current node
	
	
	==> kindnet [9036d93a9870e51a8553d29c237178734288ec8578cd01fe4a9d30733a29a989] <==
	I1019 13:05:17.763393       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:05:17.765062       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 13:05:17.765850       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:05:17.805743       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:05:17.805784       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:05:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:05:18.050640       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:05:18.050668       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:05:18.050679       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:05:18.057998       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 13:05:23.853768       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:05:23.853861       1 metrics.go:72] Registering metrics
	I1019 13:05:23.860956       1 controller.go:711] "Syncing nftables rules"
	I1019 13:05:28.051233       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:05:28.051281       1 main.go:301] handling current node
	I1019 13:05:38.050841       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:05:38.050881       1 main.go:301] handling current node
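
Both kindnet runs follow the standard client-go informer lifecycle visible in these logs: start informers, wait for caches to sync (the first run's reflectors time out against 10.96.0.1:443 while the apiserver is down, then recover), then handle nodes on each pass. A compact sketch of that lifecycle with a shared informer factory; rest.InClusterConfig assumes the code runs inside a pod, matching the apiserver endpoint in the log:

	package main
	
	import (
		"fmt"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // https://10.96.0.1:443, as in the log
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		nodeInformer := factory.Core().V1().Nodes().Informer()
	
		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
	
		// "Waiting for caches to sync" ... "Caches are synced"
		if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
			panic("caches never synced")
		}
		fmt.Println("Caches are synced")
	}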
	
	
	==> kube-apiserver [0e93a892e96f5ce20eb832477b72857cd295910746fafbd1f048bbf773aaaed1] <==
	W1019 13:05:08.736900       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.736954       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737004       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.736758       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737011       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737132       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737187       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737223       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737288       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737340       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737378       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737435       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737490       1 logging.go:55] [core] [Channel #8 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737532       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737589       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737641       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737672       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.736612       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737497       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737347       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.736150       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.736873       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737193       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737804       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737840       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
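
This block is every apiserver-to-etcd gRPC channel cycling through reconnect backoff while etcd was stopped; each channel keeps retrying until the restarted etcd listens on 2379 again. A hypothetical sketch of observing those connectivity-state transitions on a single channel:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)
	
	func main() {
		conn, err := grpc.Dial("127.0.0.1:2379",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		conn.Connect() // leave IDLE and start dialing
	
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		// With nothing listening, this prints transitions like
		// CONNECTING -> TRANSIENT_FAILURE -> CONNECTING -> ...,
		// the state behind the "addrConn.createTransport failed" lines.
		for {
			s := conn.GetState()
			fmt.Println("state:", s)
			if !conn.WaitForStateChange(ctx, s) {
				return // context expired
			}
		}
	}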
	
	
	==> kube-apiserver [8bdbd9430d1867563c01e9db16c16d8bfc47dfbd4064de68b62e3c608fc7b2e8] <==
	I1019 13:05:23.730439       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 13:05:23.730455       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 13:05:23.734378       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 13:05:23.739198       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 13:05:23.739310       1 policy_source.go:240] refreshing policies
	I1019 13:05:23.748031       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 13:05:23.748265       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 13:05:23.749662       1 aggregator.go:171] initial CRD sync complete...
	I1019 13:05:23.751022       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 13:05:23.751099       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 13:05:23.751131       1 cache.go:39] Caches are synced for autoregister controller
	I1019 13:05:23.751385       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 13:05:23.751450       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 13:05:23.775546       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 13:05:23.788594       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 13:05:23.789924       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:05:23.794215       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:05:23.825276       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1019 13:05:23.894385       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 13:05:24.227571       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:05:24.777503       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:05:26.173579       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:05:26.271842       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 13:05:26.323967       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 13:05:26.424521       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [2f49b3722734ec5fa7cb1b7440bec821f2cfc59804041aba24306e9dcc504795] <==
	I1019 13:04:19.091710       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 13:04:19.091741       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 13:04:19.091767       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 13:04:19.103865       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:04:19.103966       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 13:04:19.103996       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 13:04:19.108653       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-052658" podCIDRs=["10.244.0.0/24"]
	I1019 13:04:19.109896       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 13:04:19.110560       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 13:04:19.120124       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 13:04:19.121358       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 13:04:19.121467       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 13:04:19.121517       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 13:04:19.121780       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 13:04:19.122526       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 13:04:19.123098       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 13:04:19.123258       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 13:04:19.123159       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 13:04:19.125087       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 13:04:19.125178       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 13:04:19.126454       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:04:19.128499       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 13:04:19.135077       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:04:19.135187       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:05:04.077634       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [ba8f62ba490de8933a05b5c6dbf528ace592eba1da16695b3e24170c833da729] <==
	I1019 13:05:26.034034       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 13:05:26.040164       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 13:05:26.049172       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 13:05:26.049310       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 13:05:26.049373       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 13:05:26.049405       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 13:05:26.049432       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 13:05:26.049535       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 13:05:26.054055       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 13:05:26.057349       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:05:26.057756       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 13:05:26.063866       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 13:05:26.065434       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:05:26.065702       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 13:05:26.071108       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:05:26.071196       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 13:05:26.071228       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 13:05:26.076957       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 13:05:26.076969       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 13:05:26.076988       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 13:05:26.082587       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 13:05:26.086893       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 13:05:26.093171       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 13:05:26.101472       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 13:05:26.104866       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-proxy [962513f0b7d745c9f24d3922de11904ff5dac0b2b94327d9b2481cfa5d29c246] <==
	I1019 13:05:21.478065       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:05:22.502210       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:05:23.903537       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:05:23.903654       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 13:05:23.903802       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:05:23.978859       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:05:23.978987       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:05:23.993455       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:05:23.994063       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:05:23.994291       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:05:24.001120       1 config.go:200] "Starting service config controller"
	I1019 13:05:24.001233       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:05:24.001277       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:05:24.001321       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:05:24.001377       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:05:24.001415       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:05:24.007469       1 config.go:309] "Starting node config controller"
	I1019 13:05:24.007562       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:05:24.007595       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:05:24.101376       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 13:05:24.101457       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:05:24.101469       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [fa5349ebdab5aa344012950f607a1526ac8a79065f14d86c23329d96790f97a2] <==
	I1019 13:04:22.409866       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:04:22.499188       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:04:22.599996       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:04:22.600042       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 13:04:22.600126       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:04:22.626326       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:04:22.626445       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:04:22.634658       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:04:22.635030       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:04:22.635239       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:04:22.636491       1 config.go:200] "Starting service config controller"
	I1019 13:04:22.636563       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:04:22.636606       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:04:22.636634       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:04:22.636667       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:04:22.636695       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:04:22.637332       1 config.go:309] "Starting node config controller"
	I1019 13:04:22.644328       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:04:22.644408       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:04:22.737548       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:04:22.737650       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 13:04:22.737688       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
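
Both kube-proxy generations set route_localnet=1 so NodePorts also answer on loopback, as the proxier line above explains. The knob is a plain sysctl; a sketch of flipping it directly (root and Linux assumed; kube-proxy itself does this through its own sysctl helper):

	package main
	
	import "os"
	
	func main() {
		// Equivalent of: sysctl -w net.ipv4.conf.all.route_localnet=1
		// This is the setting behind "Setting route_localnet=1 to allow
		// node-ports on localhost" in the kube-proxy logs above.
		err := os.WriteFile("/proc/sys/net/ipv4/conf/all/route_localnet",
			[]byte("1\n"), 0o644)
		if err != nil {
			panic(err)
		}
	}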
	
	
	==> kube-scheduler [bb49a02b287e654e3bf830c5ec876e1c796bfe354b6a4345250db63f8963a09b] <==
	I1019 13:04:09.072429       1 serving.go:386] Generated self-signed cert in-memory
	W1019 13:04:13.490400       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 13:04:13.490430       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 13:04:13.490441       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 13:04:13.490448       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 13:04:13.552754       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 13:04:13.556375       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:04:13.564696       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:04:13.564807       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:04:13.564832       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:04:13.564859       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 13:04:13.592860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1019 13:04:14.565092       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:05:08.717019       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 13:05:08.717047       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 13:05:08.717082       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 13:05:08.717109       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:05:08.717396       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 13:05:08.717435       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c743f2b2cc5739d6f671d60d15ea27e27dfa0dc935153abf39b1ade383be12c8] <==
	I1019 13:05:23.543072       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:05:23.545622       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:05:23.555064       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 13:05:23.555147       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:05:23.567152       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1019 13:05:23.668816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 13:05:23.669278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 13:05:23.669351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 13:05:23.669435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 13:05:23.669496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 13:05:23.669551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 13:05:23.669607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 13:05:23.669661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 13:05:23.670673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 13:05:23.670928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 13:05:23.671916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 13:05:23.672554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 13:05:23.672698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 13:05:23.676172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 13:05:23.676212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 13:05:23.676241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 13:05:23.676401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 13:05:23.676514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 13:05:23.694278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1019 13:05:25.269822       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.521989    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-58smf\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a0499250-d06a-41f4-9f84-bc7972eb976b" pod="kube-system/kindnet-58smf"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.522176    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-9fkgs\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bc66b89c-607e-43bb-bf8d-cd5963f3e7df" pod="kube-system/coredns-66bc5c9577-9fkgs"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: I1019 13:05:17.522956    1311 scope.go:117] "RemoveContainer" containerID="bb49a02b287e654e3bf830c5ec876e1c796bfe354b6a4345250db63f8963a09b"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.523560    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b8849b9e91fc6b3da9ce2ba93ecc23ce" pod="kube-system/kube-controller-manager-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.523785    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7cad4830782aab0ed630ed2b840cc95c" pod="kube-system/kube-scheduler-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.524025    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5df87c02bdd2a663aba9a0886d071fc3" pod="kube-system/etcd-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.524263    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="227aee2d66fe89c7a2e9965aa151eb74" pod="kube-system/kube-apiserver-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.524485    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-58smf\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a0499250-d06a-41f4-9f84-bc7972eb976b" pod="kube-system/kindnet-58smf"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.524696    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-9fkgs\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bc66b89c-607e-43bb-bf8d-cd5963f3e7df" pod="kube-system/coredns-66bc5c9577-9fkgs"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: I1019 13:05:17.557933    1311 scope.go:117] "RemoveContainer" containerID="fa5349ebdab5aa344012950f607a1526ac8a79065f14d86c23329d96790f97a2"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.558298    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xzhr\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="02e96e3a-3380-49f9-b471-ec534e19fe43" pod="kube-system/kube-proxy-8xzhr"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.558628    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-58smf\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a0499250-d06a-41f4-9f84-bc7972eb976b" pod="kube-system/kindnet-58smf"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.558873    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-9fkgs\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bc66b89c-607e-43bb-bf8d-cd5963f3e7df" pod="kube-system/coredns-66bc5c9577-9fkgs"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.559080    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b8849b9e91fc6b3da9ce2ba93ecc23ce" pod="kube-system/kube-controller-manager-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.559306    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7cad4830782aab0ed630ed2b840cc95c" pod="kube-system/kube-scheduler-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.559503    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5df87c02bdd2a663aba9a0886d071fc3" pod="kube-system/etcd-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.559702    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="227aee2d66fe89c7a2e9965aa151eb74" pod="kube-system/kube-apiserver-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.772884    1311 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{etcd-pause-052658.186fe6385f7597b2  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-pause-052658,UID:5df87c02bdd2a663aba9a0886d071fc3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://127.0.0.1:2381/readyz\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:pause-052658,},FirstTimestamp:2025-10-19 13:05:09.119252402 +0000 UTC m=+54.288485472,LastTimestamp:2025-10-19 13:05:09.119252402 +0000 UTC m=+54.288485472,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-052658,}"
	Oct 19 13:05:23 pause-052658 kubelet[1311]: E1019 13:05:23.502342    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-052658\" is forbidden: User \"system:node:pause-052658\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-052658' and this object" podUID="b8849b9e91fc6b3da9ce2ba93ecc23ce" pod="kube-system/kube-controller-manager-pause-052658"
	Oct 19 13:05:23 pause-052658 kubelet[1311]: E1019 13:05:23.502695    1311 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-052658\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-052658' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 19 13:05:23 pause-052658 kubelet[1311]: E1019 13:05:23.668483    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-052658\" is forbidden: User \"system:node:pause-052658\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-052658' and this object" podUID="7cad4830782aab0ed630ed2b840cc95c" pod="kube-system/kube-scheduler-pause-052658"
	Oct 19 13:05:35 pause-052658 kubelet[1311]: W1019 13:05:35.514763    1311 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 19 13:05:35 pause-052658 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 13:05:35 pause-052658 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 13:05:35 pause-052658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
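The "Failed to watch ... is forbidden" reflector errors in the scheduler logs above are the usual symptom of a restart racing the apiserver's RBAC bootstrap ([-]poststarthook/rbac/bootstrap-roles also reports failed in the healthz output later in this report); the watches recover once the bootstrap hooks finish. A minimal manual check, assuming the pause-052658 kubeconfig context from this run is still available:

	kubectl --context pause-052658 auth can-i list pods \
	  --as=system:kube-scheduler -n kube-system

auth can-i answers yes/no per verb and resource, so the same call can be repeated for each resource named in the reflector errors.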
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-052658 -n pause-052658
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-052658 -n pause-052658: exit status 2 (549.008971ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
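The exit status 2 is tolerated here because the harness only needs the printed field, not a healthy cluster. The same Go-template mechanism can report several status fields at once; Host and APIServer appear in this report, while Kubelet and Kubeconfig are assumed from minikube's documented status output:

	out/minikube-linux-arm64 status -p pause-052658 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'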
helpers_test.go:269: (dbg) Run:  kubectl --context pause-052658 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
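The snapshot is read from the harness host's environment; an equivalent one-liner for reproducing it locally:

	env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo 'no proxy variables set'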
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-052658
helpers_test.go:243: (dbg) docker inspect pause-052658:

-- stdout --
	[
	    {
	        "Id": "7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087",
	        "Created": "2025-10-19T13:03:41.72178409Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 425534,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:03:42.603837656Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087/hostname",
	        "HostsPath": "/var/lib/docker/containers/7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087/hosts",
	        "LogPath": "/var/lib/docker/containers/7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087/7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087-json.log",
	        "Name": "/pause-052658",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-052658:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-052658",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7495ccad9f7a4f485a483c10bd6f00b319e7cfcf636345e499f68fd6e4ad8087",
	                "LowerDir": "/var/lib/docker/overlay2/1c57f188120ea913097609baa89338ca49cf0eeccf67bd6bb88ea1d5f92ca438-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c57f188120ea913097609baa89338ca49cf0eeccf67bd6bb88ea1d5f92ca438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c57f188120ea913097609baa89338ca49cf0eeccf67bd6bb88ea1d5f92ca438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c57f188120ea913097609baa89338ca49cf0eeccf67bd6bb88ea1d5f92ca438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-052658",
	                "Source": "/var/lib/docker/volumes/pause-052658/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-052658",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-052658",
	                "name.minikube.sigs.k8s.io": "pause-052658",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3b261fb9990d2e5b2aef68f49d751f395ed55d00eff9556a51053141754e68d8",
	            "SandboxKey": "/var/run/docker/netns/3b261fb9990d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33343"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33344"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33347"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33345"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33346"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-052658": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:7a:4b:a3:3f:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "940a40da5d48c45cd8a419b4d8ee2424ed671266b562c672c2d6bce42aaa1ea7",
	                    "EndpointID": "c7972a9f3e02509da05d4a79295271b5c7de1519a04f178526f5d2d25765b859",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-052658",
	                        "7495ccad9f7a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
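The one field in the inspect payload that matters for the failed status checks is the 8443/tcp mapping under NetworkSettings.Ports, the host-side endpoint the harness dials for the API server. A sketch pulling just that value with docker's built-in Go templates:

	docker inspect pause-052658 \
	  --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'

Against the output above this prints 33346.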
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-052658 -n pause-052658
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-052658 -n pause-052658: exit status 2 (460.562612ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-052658 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-052658 logs -n 25: (1.789402018s)
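logs -n 25 limits each collected log to its last 25 lines; when a post-mortem needs the full history, minikube's documented --file flag writes everything to disk instead. A sketch:

	out/minikube-linux-arm64 -p pause-052658 logs --file=pause-052658-logs.txt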
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p test-preload-774430                                                                                                                   │ test-preload-774430         │ jenkins │ v1.37.0 │ 19 Oct 25 13:00 UTC │ 19 Oct 25 13:00 UTC │
	│ start   │ -p test-preload-774430 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                        │ test-preload-774430         │ jenkins │ v1.37.0 │ 19 Oct 25 13:00 UTC │ 19 Oct 25 13:01 UTC │
	│ image   │ test-preload-774430 image list                                                                                                           │ test-preload-774430         │ jenkins │ v1.37.0 │ 19 Oct 25 13:01 UTC │ 19 Oct 25 13:01 UTC │
	│ delete  │ -p test-preload-774430                                                                                                                   │ test-preload-774430         │ jenkins │ v1.37.0 │ 19 Oct 25 13:01 UTC │ 19 Oct 25 13:01 UTC │
	│ start   │ -p scheduled-stop-739112 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:01 UTC │ 19 Oct 25 13:02 UTC │
	│ stop    │ -p scheduled-stop-739112 --schedule 5m                                                                                                   │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 5m                                                                                                   │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 5m                                                                                                   │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 15s                                                                                                  │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 15s                                                                                                  │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 15s                                                                                                  │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --cancel-scheduled                                                                                              │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │ 19 Oct 25 13:02 UTC │
	│ stop    │ -p scheduled-stop-739112 --schedule 15s                                                                                                  │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 15s                                                                                                  │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │                     │
	│ stop    │ -p scheduled-stop-739112 --schedule 15s                                                                                                  │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:02 UTC │ 19 Oct 25 13:02 UTC │
	│ delete  │ -p scheduled-stop-739112                                                                                                                 │ scheduled-stop-739112       │ jenkins │ v1.37.0 │ 19 Oct 25 13:03 UTC │ 19 Oct 25 13:03 UTC │
	│ start   │ -p insufficient-storage-126728 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-126728 │ jenkins │ v1.37.0 │ 19 Oct 25 13:03 UTC │                     │
	│ delete  │ -p insufficient-storage-126728                                                                                                           │ insufficient-storage-126728 │ jenkins │ v1.37.0 │ 19 Oct 25 13:03 UTC │ 19 Oct 25 13:03 UTC │
	│ start   │ -p pause-052658 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-052658                │ jenkins │ v1.37.0 │ 19 Oct 25 13:03 UTC │ 19 Oct 25 13:05 UTC │
	│ start   │ -p missing-upgrade-754625 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-754625      │ jenkins │ v1.32.0 │ 19 Oct 25 13:03 UTC │ 19 Oct 25 13:04 UTC │
	│ start   │ -p missing-upgrade-754625 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-754625      │ jenkins │ v1.37.0 │ 19 Oct 25 13:04 UTC │ 19 Oct 25 13:05 UTC │
	│ start   │ -p pause-052658 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-052658                │ jenkins │ v1.37.0 │ 19 Oct 25 13:05 UTC │ 19 Oct 25 13:05 UTC │
	│ delete  │ -p missing-upgrade-754625                                                                                                                │ missing-upgrade-754625      │ jenkins │ v1.37.0 │ 19 Oct 25 13:05 UTC │ 19 Oct 25 13:05 UTC │
	│ start   │ -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-104724   │ jenkins │ v1.37.0 │ 19 Oct 25 13:05 UTC │                     │
	│ pause   │ -p pause-052658 --alsologtostderr -v=5                                                                                                   │ pause-052658                │ jenkins │ v1.37.0 │ 19 Oct 25 13:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:05:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:05:23.999474  433679 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:05:23.999726  433679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:05:23.999754  433679 out.go:374] Setting ErrFile to fd 2...
	I1019 13:05:23.999773  433679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:05:24.000104  433679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:05:24.000686  433679 out.go:368] Setting JSON to false
	I1019 13:05:24.006414  433679 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10074,"bootTime":1760869050,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:05:24.006557  433679 start.go:141] virtualization:  
	I1019 13:05:24.012319  433679 out.go:179] * [kubernetes-upgrade-104724] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:05:24.015654  433679 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:05:24.015735  433679 notify.go:220] Checking for updates...
	I1019 13:05:24.021779  433679 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:05:24.024768  433679 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:05:24.027842  433679 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:05:24.031149  433679 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:05:24.034302  433679 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:05:24.038634  433679 config.go:182] Loaded profile config "pause-052658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:05:24.038798  433679 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:05:24.089720  433679 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:05:24.089838  433679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:05:24.193800  433679 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 13:05:24.182900924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:05:24.193908  433679 docker.go:318] overlay module found
	I1019 13:05:24.196965  433679 out.go:179] * Using the docker driver based on user configuration
	I1019 13:05:24.200009  433679 start.go:305] selected driver: docker
	I1019 13:05:24.200032  433679 start.go:925] validating driver "docker" against <nil>
	I1019 13:05:24.200045  433679 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:05:24.200789  433679 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:05:24.303954  433679 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 13:05:24.293934893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:05:24.304097  433679 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 13:05:24.304317  433679 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 13:05:24.307020  433679 out.go:179] * Using Docker driver with root privileges
	I1019 13:05:24.309733  433679 cni.go:84] Creating CNI manager for ""
	I1019 13:05:24.309805  433679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:05:24.309814  433679 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 13:05:24.309891  433679 start.go:349] cluster config:
	{Name:kubernetes-upgrade-104724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-104724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:05:24.312966  433679 out.go:179] * Starting "kubernetes-upgrade-104724" primary control-plane node in "kubernetes-upgrade-104724" cluster
	I1019 13:05:24.315869  433679 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:05:24.318908  433679 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:05:24.321763  433679 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 13:05:24.321817  433679 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1019 13:05:24.321826  433679 cache.go:58] Caching tarball of preloaded images
	I1019 13:05:24.321921  433679 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:05:24.321931  433679 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1019 13:05:24.322064  433679 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/config.json ...
	I1019 13:05:24.322082  433679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/config.json: {Name:mk371363c3bcd2d0294e4b2364a286d16f3cccb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:05:24.322206  433679 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:05:24.353978  433679 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:05:24.353999  433679 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:05:24.354019  433679 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:05:24.354042  433679 start.go:360] acquireMachinesLock for kubernetes-upgrade-104724: {Name:mk582c3f8c76afc27224c5c18a2f06c352280fba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:05:24.354796  433679 start.go:364] duration metric: took 734.993µs to acquireMachinesLock for "kubernetes-upgrade-104724"
	I1019 13:05:24.354835  433679 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-104724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-104724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:05:24.354900  433679 start.go:125] createHost starting for "" (driver="docker")
	I1019 13:05:23.799220  431885 node_ready.go:49] node "pause-052658" is "Ready"
	I1019 13:05:23.799247  431885 node_ready.go:38] duration metric: took 4.674232289s for node "pause-052658" to be "Ready" ...
	I1019 13:05:23.799262  431885 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:05:23.799319  431885 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:05:23.825225  431885 api_server.go:72] duration metric: took 5.06067273s to wait for apiserver process to appear ...
	I1019 13:05:23.825256  431885 api_server.go:88] waiting for apiserver healthz status ...
	I1019 13:05:23.825286  431885 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 13:05:23.890974  431885 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 13:05:23.891006  431885 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 13:05:24.325410  431885 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 13:05:24.337200  431885 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 13:05:24.337231  431885 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 13:05:24.825763  431885 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 13:05:24.841528  431885 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 13:05:24.843959  431885 api_server.go:141] control plane version: v1.34.1
	I1019 13:05:24.844034  431885 api_server.go:131] duration metric: took 1.018769315s to wait for apiserver health ...
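
The wait above is a plain poll-until-200 loop against the apiserver's /healthz endpoint; the ~500ms retry cadence is inferred from the timestamps (23.825 -> 24.325 -> 24.825). A minimal Go sketch of that loop, not minikube's actual api_server.go code, assuming a self-signed apiserver certificate (hence the skipped TLS verification):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
// Non-200 bodies (the "[-]poststarthook/... failed" dumps above) are printed
// so the caller can see which hooks are still pending.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver presents a cert signed by minikube's private CA;
		// a stricter client would load that CA instead of skipping checks.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver /healthz not ok within %s", timeout)
}

func main() {
	err := waitForHealthz("https://192.168.76.2:8443/healthz", 500*time.Millisecond, 4*time.Minute)
	fmt.Println(err)
}
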
	I1019 13:05:24.844056  431885 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 13:05:24.850773  431885 system_pods.go:59] 7 kube-system pods found
	I1019 13:05:24.850810  431885 system_pods.go:61] "coredns-66bc5c9577-9fkgs" [bc66b89c-607e-43bb-bf8d-cd5963f3e7df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:05:24.850819  431885 system_pods.go:61] "etcd-pause-052658" [869c76be-f363-43df-a65f-495af9c817d4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:05:24.850824  431885 system_pods.go:61] "kindnet-58smf" [a0499250-d06a-41f4-9f84-bc7972eb976b] Running
	I1019 13:05:24.850831  431885 system_pods.go:61] "kube-apiserver-pause-052658" [14d2e5a9-d322-4636-8cf0-ab7c8d49f95a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:05:24.850844  431885 system_pods.go:61] "kube-controller-manager-pause-052658" [13e451b2-eb9f-411e-a150-4d58198964f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:05:24.850854  431885 system_pods.go:61] "kube-proxy-8xzhr" [02e96e3a-3380-49f9-b471-ec534e19fe43] Running
	I1019 13:05:24.850861  431885 system_pods.go:61] "kube-scheduler-pause-052658" [4eee30ae-e88b-48a7-8e79-71cfa6b2ec5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:05:24.850867  431885 system_pods.go:74] duration metric: took 6.791261ms to wait for pod list to return data ...
	I1019 13:05:24.850876  431885 default_sa.go:34] waiting for default service account to be created ...
	I1019 13:05:24.856545  431885 default_sa.go:45] found service account: "default"
	I1019 13:05:24.856570  431885 default_sa.go:55] duration metric: took 5.688271ms for default service account to be created ...
	I1019 13:05:24.856579  431885 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 13:05:24.862759  431885 system_pods.go:86] 7 kube-system pods found
	I1019 13:05:24.862856  431885 system_pods.go:89] "coredns-66bc5c9577-9fkgs" [bc66b89c-607e-43bb-bf8d-cd5963f3e7df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:05:24.862881  431885 system_pods.go:89] "etcd-pause-052658" [869c76be-f363-43df-a65f-495af9c817d4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:05:24.862927  431885 system_pods.go:89] "kindnet-58smf" [a0499250-d06a-41f4-9f84-bc7972eb976b] Running
	I1019 13:05:24.862955  431885 system_pods.go:89] "kube-apiserver-pause-052658" [14d2e5a9-d322-4636-8cf0-ab7c8d49f95a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:05:24.862987  431885 system_pods.go:89] "kube-controller-manager-pause-052658" [13e451b2-eb9f-411e-a150-4d58198964f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:05:24.863023  431885 system_pods.go:89] "kube-proxy-8xzhr" [02e96e3a-3380-49f9-b471-ec534e19fe43] Running
	I1019 13:05:24.863045  431885 system_pods.go:89] "kube-scheduler-pause-052658" [4eee30ae-e88b-48a7-8e79-71cfa6b2ec5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:05:24.863067  431885 system_pods.go:126] duration metric: took 6.481711ms to wait for k8s-apps to be running ...
	I1019 13:05:24.863108  431885 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 13:05:24.863195  431885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:05:24.880981  431885 system_svc.go:56] duration metric: took 17.864045ms WaitForService to wait for kubelet
	I1019 13:05:24.881071  431885 kubeadm.go:586] duration metric: took 6.116523713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:05:24.881106  431885 node_conditions.go:102] verifying NodePressure condition ...
	I1019 13:05:24.886979  431885 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 13:05:24.887061  431885 node_conditions.go:123] node cpu capacity is 2
	I1019 13:05:24.887087  431885 node_conditions.go:105] duration metric: took 5.946236ms to run NodePressure ...
	I1019 13:05:24.887112  431885 start.go:241] waiting for startup goroutines ...
	I1019 13:05:24.887148  431885 start.go:246] waiting for cluster config update ...
	I1019 13:05:24.887175  431885 start.go:255] writing updated cluster config ...
	I1019 13:05:24.887543  431885 ssh_runner.go:195] Run: rm -f paused
	I1019 13:05:24.899776  431885 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:05:24.900365  431885 kapi.go:59] client config for pause-052658: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-292654/.minikube/profiles/pause-052658/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-292654/.minikube/profiles/pause-052658/client.key", CAFile:"/home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21201f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 13:05:24.903320  431885 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9fkgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:24.358246  433679 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 13:05:24.358489  433679 start.go:159] libmachine.API.Create for "kubernetes-upgrade-104724" (driver="docker")
	I1019 13:05:24.358534  433679 client.go:168] LocalClient.Create starting
	I1019 13:05:24.358617  433679 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem
	I1019 13:05:24.358652  433679 main.go:141] libmachine: Decoding PEM data...
	I1019 13:05:24.358665  433679 main.go:141] libmachine: Parsing certificate...
	I1019 13:05:24.358717  433679 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem
	I1019 13:05:24.358734  433679 main.go:141] libmachine: Decoding PEM data...
	I1019 13:05:24.358750  433679 main.go:141] libmachine: Parsing certificate...
	I1019 13:05:24.359123  433679 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-104724 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 13:05:24.377468  433679 cli_runner.go:211] docker network inspect kubernetes-upgrade-104724 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 13:05:24.377555  433679 network_create.go:284] running [docker network inspect kubernetes-upgrade-104724] to gather additional debugging logs...
	I1019 13:05:24.377572  433679 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-104724
	W1019 13:05:24.398680  433679 cli_runner.go:211] docker network inspect kubernetes-upgrade-104724 returned with exit code 1
	I1019 13:05:24.398718  433679 network_create.go:287] error running [docker network inspect kubernetes-upgrade-104724]: docker network inspect kubernetes-upgrade-104724: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-104724 not found
	I1019 13:05:24.398731  433679 network_create.go:289] output of [docker network inspect kubernetes-upgrade-104724]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-104724 not found
	
	** /stderr **
	I1019 13:05:24.398837  433679 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:05:24.419702  433679 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-319c97358c5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2a:99:c3:44:12:51} reservation:<nil>}
	I1019 13:05:24.420004  433679 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5c09b33e0936 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:93:4b:f6:fd:1c} reservation:<nil>}
	I1019 13:05:24.420345  433679 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c2bbaadd4a8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:8f:96:27:48:2c} reservation:<nil>}
	I1019 13:05:24.420618  433679 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-940a40da5d48 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:12:99:02:79:b8:9e} reservation:<nil>}
	I1019 13:05:24.421009  433679 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d9ff0}
	I1019 13:05:24.421026  433679 network_create.go:124] attempt to create docker network kubernetes-upgrade-104724 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 13:05:24.421084  433679 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-104724 kubernetes-upgrade-104724
	I1019 13:05:24.492304  433679 network_create.go:108] docker network kubernetes-upgrade-104724 192.168.85.0/24 created
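
The subnet scan at 13:05:24.419-24.421 walks candidate private /24s and skips any already backing a Docker bridge before creating the new network. A minimal sketch of that selection logic; the step-of-9 candidate sequence (49, 58, 67, 76, 85, ...) is inferred from the subnets in the log, and the overlap test is simplified to mutual containment of network addresses:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 that overlaps none of the
// taken subnets, the way the network.go:211 "skipping subnet ... that is
// taken" lines walk 192.168.49.0/24, .58, .67, .76 before landing on .85.
func firstFreeSubnet(taken []*net.IPNet) (*net.IPNet, error) {
	for octet := 49; octet <= 247; octet += 9 {
		_, cand, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
		free := true
		for _, t := range taken {
			if t.Contains(cand.IP) || cand.Contains(t.IP) {
				free = false
				break
			}
		}
		if free {
			return cand, nil
		}
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	var taken []*net.IPNet
	for _, c := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"} {
		_, n, _ := net.ParseCIDR(c)
		taken = append(taken, n)
	}
	free, _ := firstFreeSubnet(taken)
	fmt.Println("using free private subnet", free) // 192.168.85.0/24, matching the log
}
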
	I1019 13:05:24.492332  433679 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-104724" container
	I1019 13:05:24.492421  433679 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 13:05:24.508979  433679 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-104724 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-104724 --label created_by.minikube.sigs.k8s.io=true
	I1019 13:05:24.529808  433679 oci.go:103] Successfully created a docker volume kubernetes-upgrade-104724
	I1019 13:05:24.529907  433679 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-104724-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-104724 --entrypoint /usr/bin/test -v kubernetes-upgrade-104724:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 13:05:25.138321  433679 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-104724
	I1019 13:05:25.138371  433679 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 13:05:25.138393  433679 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 13:05:25.138460  433679 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-104724:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 13:05:26.910504  431885 pod_ready.go:104] pod "coredns-66bc5c9577-9fkgs" is not "Ready", error: <nil>
	W1019 13:05:29.409410  431885 pod_ready.go:104] pod "coredns-66bc5c9577-9fkgs" is not "Ready", error: <nil>
	I1019 13:05:29.909559  431885 pod_ready.go:94] pod "coredns-66bc5c9577-9fkgs" is "Ready"
	I1019 13:05:29.909591  431885 pod_ready.go:86] duration metric: took 5.006249898s for pod "coredns-66bc5c9577-9fkgs" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:29.912419  431885 pod_ready.go:83] waiting for pod "etcd-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:30.105989  433679 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-104724:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.967458731s)
	I1019 13:05:30.106032  433679 kic.go:203] duration metric: took 4.967629359s to extract preloaded images to volume ...
	W1019 13:05:30.106194  433679 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 13:05:30.106313  433679 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 13:05:30.183828  433679 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-104724 --name kubernetes-upgrade-104724 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-104724 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-104724 --network kubernetes-upgrade-104724 --ip 192.168.85.2 --volume kubernetes-upgrade-104724:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 13:05:30.498016  433679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104724 --format={{.State.Running}}
	I1019 13:05:30.520881  433679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104724 --format={{.State.Status}}
	I1019 13:05:30.546770  433679 cli_runner.go:164] Run: docker exec kubernetes-upgrade-104724 stat /var/lib/dpkg/alternatives/iptables
	I1019 13:05:30.606425  433679 oci.go:144] the created container "kubernetes-upgrade-104724" has a running status.
	I1019 13:05:30.606460  433679 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa...
	I1019 13:05:31.327475  433679 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 13:05:31.346693  433679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104724 --format={{.State.Status}}
	I1019 13:05:31.362800  433679 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 13:05:31.362823  433679 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-104724 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 13:05:31.404734  433679 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104724 --format={{.State.Status}}
	I1019 13:05:31.425166  433679 machine.go:93] provisionDockerMachine start ...
	I1019 13:05:31.425270  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:31.443108  433679 main.go:141] libmachine: Using SSH client type: native
	I1019 13:05:31.443439  433679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33358 <nil> <nil>}
	I1019 13:05:31.443455  433679 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:05:31.444072  433679 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1019 13:05:31.917534  431885 pod_ready.go:104] pod "etcd-pause-052658" is not "Ready", error: <nil>
	I1019 13:05:32.417770  431885 pod_ready.go:94] pod "etcd-pause-052658" is "Ready"
	I1019 13:05:32.417797  431885 pod_ready.go:86] duration metric: took 2.505353857s for pod "etcd-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.420176  431885 pod_ready.go:83] waiting for pod "kube-apiserver-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.424861  431885 pod_ready.go:94] pod "kube-apiserver-pause-052658" is "Ready"
	I1019 13:05:32.424893  431885 pod_ready.go:86] duration metric: took 4.684293ms for pod "kube-apiserver-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.427387  431885 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.432166  431885 pod_ready.go:94] pod "kube-controller-manager-pause-052658" is "Ready"
	I1019 13:05:32.432193  431885 pod_ready.go:86] duration metric: took 4.777636ms for pod "kube-controller-manager-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.434396  431885 pod_ready.go:83] waiting for pod "kube-proxy-8xzhr" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.708221  431885 pod_ready.go:94] pod "kube-proxy-8xzhr" is "Ready"
	I1019 13:05:32.708297  431885 pod_ready.go:86] duration metric: took 273.876153ms for pod "kube-proxy-8xzhr" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:32.907427  431885 pod_ready.go:83] waiting for pod "kube-scheduler-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:34.914105  431885 pod_ready.go:94] pod "kube-scheduler-pause-052658" is "Ready"
	I1019 13:05:34.914136  431885 pod_ready.go:86] duration metric: took 2.006678618s for pod "kube-scheduler-pause-052658" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:05:34.914148  431885 pod_ready.go:40] duration metric: took 10.014311746s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:05:34.974531  431885 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 13:05:34.978438  431885 out.go:179] * Done! kubectl is now configured to use "pause-052658" cluster and "default" namespace by default
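
The pod_ready waits that just completed check each labelled kube-system pod for the PodReady condition. A minimal client-go sketch of one such check; the kubeconfig path and the kube-dns label selector are illustrative, and the real code reuses the rest.Config logged at kapi.go:59 rather than reading a kubeconfig file:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True, the same
// test behind the pod_ready.go:94 `is "Ready"` lines above.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path, not one that appears in this report.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute) // "extra waiting up to 4m0s"
	defer cancel()
	for ctx.Err() == nil {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 && isReady(&pods.Items[0]) {
			fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod readiness")
}
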
	I1019 13:05:34.609622  433679 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-104724
	
	I1019 13:05:34.609650  433679 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-104724"
	I1019 13:05:34.609738  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:34.629106  433679 main.go:141] libmachine: Using SSH client type: native
	I1019 13:05:34.629483  433679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33358 <nil> <nil>}
	I1019 13:05:34.629500  433679 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-104724 && echo "kubernetes-upgrade-104724" | sudo tee /etc/hostname
	I1019 13:05:34.787448  433679 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-104724
	
	I1019 13:05:34.787529  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:34.805370  433679 main.go:141] libmachine: Using SSH client type: native
	I1019 13:05:34.805710  433679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33358 <nil> <nil>}
	I1019 13:05:34.805752  433679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-104724' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-104724/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-104724' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:05:34.959643  433679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 13:05:34.959678  433679 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:05:34.959698  433679 ubuntu.go:190] setting up certificates
	I1019 13:05:34.959707  433679 provision.go:84] configureAuth start
	I1019 13:05:34.959767  433679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104724
	I1019 13:05:34.980487  433679 provision.go:143] copyHostCerts
	I1019 13:05:34.980574  433679 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:05:34.980587  433679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:05:34.980660  433679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:05:34.980756  433679 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:05:34.980761  433679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:05:34.980791  433679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:05:34.980875  433679 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:05:34.980879  433679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:05:34.980904  433679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:05:34.980957  433679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-104724 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-104724 localhost minikube]
	I1019 13:05:35.470472  433679 provision.go:177] copyRemoteCerts
	I1019 13:05:35.470593  433679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:05:35.470664  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:35.499866  433679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa Username:docker}
	I1019 13:05:35.623309  433679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:05:35.646661  433679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1019 13:05:35.666519  433679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 13:05:35.685418  433679 provision.go:87] duration metric: took 725.686581ms to configureAuth
	I1019 13:05:35.685445  433679 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:05:35.685630  433679 config.go:182] Loaded profile config "kubernetes-upgrade-104724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 13:05:35.685836  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:35.708630  433679 main.go:141] libmachine: Using SSH client type: native
	I1019 13:05:35.708933  433679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33358 <nil> <nil>}
	I1019 13:05:35.708948  433679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:05:36.063946  433679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:05:36.063972  433679 machine.go:96] duration metric: took 4.6387842s to provisionDockerMachine
	I1019 13:05:36.063983  433679 client.go:171] duration metric: took 11.7054422s to LocalClient.Create
	I1019 13:05:36.063995  433679 start.go:167] duration metric: took 11.7055074s to libmachine.API.Create "kubernetes-upgrade-104724"
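
Everything provisionDockerMachine did above went over SSH to the container's forwarded port (127.0.0.1:33358), authenticated with the generated id_rsa key. A minimal golang.org/x/crypto/ssh sketch of one such ssh_runner-style command; a real provisioner would pin the host key instead of using InsecureIgnoreHostKey:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH executes one command as the "docker" user over the container's
// forwarded SSH port, the way each ssh_runner.go "Run:" line above does.
func runSSH(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local kic container
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("127.0.0.1:33358",
		"/home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa",
		"hostname")
	fmt.Println(out, err)
}
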
	I1019 13:05:36.064002  433679 start.go:293] postStartSetup for "kubernetes-upgrade-104724" (driver="docker")
	I1019 13:05:36.064012  433679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:05:36.064091  433679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:05:36.064138  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:36.095211  433679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa Username:docker}
	I1019 13:05:36.207589  433679 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:05:36.215040  433679 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:05:36.215081  433679 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:05:36.215095  433679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:05:36.215154  433679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:05:36.215239  433679 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:05:36.215338  433679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:05:36.225612  433679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:05:36.246612  433679 start.go:296] duration metric: took 182.596169ms for postStartSetup
	I1019 13:05:36.247061  433679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104724
	I1019 13:05:36.267904  433679 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/config.json ...
	I1019 13:05:36.268188  433679 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:05:36.268230  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:36.289770  433679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa Username:docker}
	I1019 13:05:36.394948  433679 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:05:36.400098  433679 start.go:128] duration metric: took 12.045163556s to createHost
	I1019 13:05:36.400174  433679 start.go:83] releasing machines lock for "kubernetes-upgrade-104724", held for 12.045360613s
	I1019 13:05:36.400269  433679 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104724
	I1019 13:05:36.418511  433679 ssh_runner.go:195] Run: cat /version.json
	I1019 13:05:36.418568  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:36.418626  433679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:05:36.418694  433679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104724
	I1019 13:05:36.437187  433679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa Username:docker}
	I1019 13:05:36.443247  433679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/kubernetes-upgrade-104724/id_rsa Username:docker}
	I1019 13:05:36.546387  433679 ssh_runner.go:195] Run: systemctl --version
	I1019 13:05:36.640102  433679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:05:36.679251  433679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:05:36.686450  433679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:05:36.686520  433679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:05:36.725459  433679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 13:05:36.725483  433679 start.go:495] detecting cgroup driver to use...
	I1019 13:05:36.725515  433679 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:05:36.725565  433679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:05:36.750118  433679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:05:36.767955  433679 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:05:36.768015  433679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:05:36.793575  433679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:05:36.812085  433679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:05:36.974911  433679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:05:37.113802  433679 docker.go:234] disabling docker service ...
	I1019 13:05:37.113921  433679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:05:37.139545  433679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:05:37.155482  433679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:05:37.280567  433679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:05:37.397969  433679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:05:37.411655  433679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:05:37.426136  433679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1019 13:05:37.426238  433679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.435416  433679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:05:37.435528  433679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.444664  433679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.453998  433679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.462875  433679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:05:37.471494  433679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.480993  433679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.495041  433679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:05:37.507694  433679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:05:37.528945  433679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:05:37.539960  433679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:05:37.694150  433679 ssh_runner.go:195] Run: sudo systemctl restart crio
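
The configuration block at 13:05:37.42-37.49 rewrites /etc/crio/crio.conf.d/02-crio.conf via sed before the daemon-reload and restart. As a stand-in for two of those sed edits (minikube itself shells out to sed, as the log shows), a small Go sketch doing equivalent whole-line replacements:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the log's
//   sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
//   sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
// by replacing whole matching lines and leaving everything else untouched.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	return cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.10\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
	// Output:
	// pause_image = "registry.k8s.io/pause:3.9"
	// cgroup_manager = "cgroupfs"
}
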
	I1019 13:05:37.852113  433679 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:05:37.852186  433679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:05:37.858285  433679 start.go:563] Will wait 60s for crictl version
	I1019 13:05:37.858351  433679 ssh_runner.go:195] Run: which crictl
	I1019 13:05:37.863890  433679 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:05:37.899036  433679 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 13:05:37.899116  433679 ssh_runner.go:195] Run: crio --version
	I1019 13:05:37.936981  433679 ssh_runner.go:195] Run: crio --version
	I1019 13:05:37.984183  433679 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1019 13:05:37.987062  433679 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-104724 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:05:38.012640  433679 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 13:05:38.017146  433679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:05:38.029008  433679 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-104724 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-104724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 13:05:38.029141  433679 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 13:05:38.029203  433679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:05:38.073510  433679 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:05:38.073538  433679 crio.go:433] Images already preloaded, skipping extraction
	I1019 13:05:38.073597  433679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:05:38.114002  433679 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:05:38.114023  433679 cache_images.go:85] Images are preloaded, skipping loading
	I1019 13:05:38.114032  433679 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1019 13:05:38.114115  433679 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-104724 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-104724 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 13:05:38.114192  433679 ssh_runner.go:195] Run: crio config
	I1019 13:05:38.207145  433679 cni.go:84] Creating CNI manager for ""
	I1019 13:05:38.207231  433679 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:05:38.207305  433679 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 13:05:38.207359  433679 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-104724 NodeName:kubernetes-upgrade-104724 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:05:38.207595  433679 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-104724"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 13:05:38.207719  433679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1019 13:05:38.218700  433679 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 13:05:38.218771  433679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 13:05:38.228917  433679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1019 13:05:38.250128  433679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:05:38.271140  433679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I1019 13:05:38.285292  433679 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 13:05:38.290383  433679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:05:38.300445  433679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:05:38.423827  433679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:05:38.443924  433679 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724 for IP: 192.168.85.2
	I1019 13:05:38.443948  433679 certs.go:195] generating shared ca certs ...
	I1019 13:05:38.443964  433679 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:05:38.444099  433679 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 13:05:38.444146  433679 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 13:05:38.444158  433679 certs.go:257] generating profile certs ...
	I1019 13:05:38.444215  433679 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/client.key
	I1019 13:05:38.444233  433679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/client.crt with IP's: []
	I1019 13:05:38.837237  433679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/client.crt ...
	I1019 13:05:38.837310  433679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/client.crt: {Name:mk33b9ef2d760808e59a472c345b0a81670af15e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:05:38.837549  433679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/client.key ...
	I1019 13:05:38.837593  433679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/client.key: {Name:mk539cf0d221cd5252c22dbdf1d64a64c701151e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:05:38.837759  433679 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/apiserver.key.f796f1f2
	I1019 13:05:38.837799  433679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/apiserver.crt.f796f1f2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
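
The apiserver serving cert being generated here carries the IP SANs from the log: 10.96.0.1 (the first service-CIDR address), 127.0.0.1, 10.0.0.1, and the node IP 192.168.85.2. A minimal crypto/x509 sketch of issuing such a cert from a throwaway CA; ECDSA keys are used for brevity, which may differ from minikube's own certs package:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServingCert signs a serving certificate for the given IP SANs with
// caCert/caKey, like the "generating signed profile cert" step above.
func issueServingCert(caCert *x509.Certificate, caKey *ecdsa.PrivateKey, ips []net.IP) ([]byte, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	// Throwaway CA standing in for the persisted minikubeCA key pair.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	pemBytes, err := issueServingCert(caCert, caKey, []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
		net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
	})
	if err != nil {
		panic(err)
	}
	os.Stdout.Write(pemBytes)
}
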
	
	
	==> CRI-O <==
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.718986319Z" level=info msg="Started container" PID=2281 containerID=8bdbd9430d1867563c01e9db16c16d8bfc47dfbd4064de68b62e3c608fc7b2e8 description=kube-system/kube-apiserver-pause-052658/kube-apiserver id=69ad1df2-e73b-453c-903c-d32f7f040258 name=/runtime.v1.RuntimeService/StartContainer sandboxID=28c7e1a3bbecbce188e478e8fe8d4018dc00bf478a6c2b05dd578c3c0c4af827
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.740180078Z" level=info msg="Created container d7ffff287898431d46f269ae1eba7808cb5fa242b40b83b1d32861a66655d7a8: kube-system/etcd-pause-052658/etcd" id=60330a5d-1de0-4fe7-991e-0b55b179cff9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.741920296Z" level=info msg="Created container ba8f62ba490de8933a05b5c6dbf528ace592eba1da16695b3e24170c833da729: kube-system/kube-controller-manager-pause-052658/kube-controller-manager" id=4a31e9d2-25f1-4d10-bc87-8f884232c13b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.744298381Z" level=info msg="Starting container: d7ffff287898431d46f269ae1eba7808cb5fa242b40b83b1d32861a66655d7a8" id=b7b2ffa0-84b7-4f36-93d9-2c559dfaac52 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.745167916Z" level=info msg="Starting container: ba8f62ba490de8933a05b5c6dbf528ace592eba1da16695b3e24170c833da729" id=f462a771-f90d-47ac-a4a5-4df5458976c5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.776413915Z" level=info msg="Started container" PID=2293 containerID=ba8f62ba490de8933a05b5c6dbf528ace592eba1da16695b3e24170c833da729 description=kube-system/kube-controller-manager-pause-052658/kube-controller-manager id=f462a771-f90d-47ac-a4a5-4df5458976c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=488f88c5437af9ab0686d7a9841e0b312caf8ec2a951f5ecf7c0f9171afe9fc7
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.785965528Z" level=info msg="Started container" PID=2292 containerID=d7ffff287898431d46f269ae1eba7808cb5fa242b40b83b1d32861a66655d7a8 description=kube-system/etcd-pause-052658/etcd id=b7b2ffa0-84b7-4f36-93d9-2c559dfaac52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a433cdae0dd2f23a31c02b2a415bd577847dc4f8c5f3bec80d9712ab5043f381
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.840400745Z" level=info msg="Created container c743f2b2cc5739d6f671d60d15ea27e27dfa0dc935153abf39b1ade383be12c8: kube-system/kube-scheduler-pause-052658/kube-scheduler" id=b682d546-4ddd-4016-9dfa-7f15a05de5b7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.841703779Z" level=info msg="Starting container: c743f2b2cc5739d6f671d60d15ea27e27dfa0dc935153abf39b1ade383be12c8" id=6977f4a6-3e47-4bd4-a737-7cff21959d65 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.864324611Z" level=info msg="Started container" PID=2324 containerID=c743f2b2cc5739d6f671d60d15ea27e27dfa0dc935153abf39b1ade383be12c8 description=kube-system/kube-scheduler-pause-052658/kube-scheduler id=6977f4a6-3e47-4bd4-a737-7cff21959d65 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48128dfc49e4b46422d0cb07dc95debfa03146d43a568b84ea2d5712ea759521
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.890065439Z" level=info msg="Created container 962513f0b7d745c9f24d3922de11904ff5dac0b2b94327d9b2481cfa5d29c246: kube-system/kube-proxy-8xzhr/kube-proxy" id=36454d8e-3a32-4c55-b38a-5197a9eac936 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.897045241Z" level=info msg="Starting container: 962513f0b7d745c9f24d3922de11904ff5dac0b2b94327d9b2481cfa5d29c246" id=4686da78-b137-404e-838b-633b94f9886a name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:05:17 pause-052658 crio[2065]: time="2025-10-19T13:05:17.90513312Z" level=info msg="Started container" PID=2321 containerID=962513f0b7d745c9f24d3922de11904ff5dac0b2b94327d9b2481cfa5d29c246 description=kube-system/kube-proxy-8xzhr/kube-proxy id=4686da78-b137-404e-838b-633b94f9886a name=/runtime.v1.RuntimeService/StartContainer sandboxID=3de2ee91a006f3aa634654e7652905e329163b0f70b9759b0a5ffcb36a5ecbca
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.055418138Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.063632657Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.063668333Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.063690512Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.075264932Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.07530467Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.075329737Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.0800277Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.080084095Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.080107201Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.084949938Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:05:28 pause-052658 crio[2065]: time="2025-10-19T13:05:28.084986755Z" level=info msg="Updated default CNI network name to kindnet"
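
The CNI monitoring lines above show CRI-O watching /etc/cni/net.d: kindnet writes 10-kindnet.conflist.temp, then renames it into place, and each CREATE/WRITE/RENAME event makes CRI-O re-resolve the default network. A minimal sketch of a directory watch of that shape, assuming github.com/fsnotify/fsnotify (illustrative only, not CRI-O's actual watcher):

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// Same event kinds CRI-O logs before re-reading the conflist.
			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
				log.Printf("CNI monitoring event %s %q; re-resolving default network", ev.Op, ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}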
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	c743f2b2cc573       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   48128dfc49e4b       kube-scheduler-pause-052658            kube-system
	962513f0b7d74       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   24 seconds ago       Running             kube-proxy                1                   3de2ee91a006f       kube-proxy-8xzhr                       kube-system
	ba8f62ba490de       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago       Running             kube-controller-manager   1                   488f88c5437af       kube-controller-manager-pause-052658   kube-system
	d7ffff2878984       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago       Running             etcd                      1                   a433cdae0dd2f       etcd-pause-052658                      kube-system
	8bdbd9430d186       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago       Running             kube-apiserver            1                   28c7e1a3bbecb       kube-apiserver-pause-052658            kube-system
	9036d93a9870e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   f17b27f4129d6       kindnet-58smf                          kube-system
	c87f85518ffef       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   ec416a9934c79       coredns-66bc5c9577-9fkgs               kube-system
	9de130db3a61f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   38 seconds ago       Exited              coredns                   0                   ec416a9934c79       coredns-66bc5c9577-9fkgs               kube-system
	fa5349ebdab5a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   3de2ee91a006f       kube-proxy-8xzhr                       kube-system
	5d464678ea1d8       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   f17b27f4129d6       kindnet-58smf                          kube-system
	bb49a02b287e6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   48128dfc49e4b       kube-scheduler-pause-052658            kube-system
	0e93a892e96f5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   28c7e1a3bbecb       kube-apiserver-pause-052658            kube-system
	2f49b3722734e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   488f88c5437af       kube-controller-manager-pause-052658   kube-system
	d676f6db0dd2d       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   a433cdae0dd2f       etcd-pause-052658                      kube-system
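
The listing above is the CRI view of the node: every control-plane container has a Running attempt-1 instance plus the original attempt-0 instance Exited, which is what a restart cycle under CRI-O looks like. A minimal sketch of fetching the same listing over the CRI socket, assuming k8s.io/cri-api and google.golang.org/grpc (the socket path matches the kubelet ContainerGCFailed event later in this report):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Truncated ID, name, attempt, and state, roughly matching the table.
		fmt.Printf("%.13s  %-25s %d  %s\n", c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}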
	
	
	==> coredns [9de130db3a61f28e3afc80c22ab1dcda87eb80e3e5cad06bbdf1723cbbc02659] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34468 - 64387 "HINFO IN 5659924498181447867.7303966064205267563. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.085421142s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c87f85518ffefcd9ed464c1e8ec3f02cb34777237b1b757d35de45530e13d804] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38639 - 44256 "HINFO IN 3878672707910489575.893701343700373851. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.040468434s
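
In both coredns instances the HINFO query against 127.0.0.1 with a random numeric name is the self-probe commonly attributed to loop detection: the server resolves a name through its own path, and NXDOMAIN (as logged here) indicates no forwarding loop. A hedged sketch of a probe of the same shape, assuming github.com/miekg/dns, the library CoreDNS is built on:

package main

import (
	"fmt"
	"math/rand"

	"github.com/miekg/dns"
)

func main() {
	// Random label pair like "3878672707910489575.893701343700373851." above;
	// NXDOMAIN on the self-query means the server is not forwarding to itself.
	name := fmt.Sprintf("%d.%d.", rand.Uint64(), rand.Uint64())
	m := new(dns.Msg)
	m.SetQuestion(name, dns.TypeHINFO)
	resp, _, err := new(dns.Client).Exchange(m, "127.0.0.1:53")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("rcode:", dns.RcodeToString[resp.Rcode])
}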
	
	
	==> describe nodes <==
	Name:               pause-052658
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-052658
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=pause-052658
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_04_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:04:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-052658
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:05:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:05:02 +0000   Sun, 19 Oct 2025 13:04:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:05:02 +0000   Sun, 19 Oct 2025 13:04:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:05:02 +0000   Sun, 19 Oct 2025 13:04:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:05:02 +0000   Sun, 19 Oct 2025 13:05:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-052658
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                0bbe64b9-a531-48c8-b22c-19bed7ed16a9
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-9fkgs                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     82s
	  kube-system                 etcd-pause-052658                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         87s
	  kube-system                 kindnet-58smf                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-pause-052658             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-controller-manager-pause-052658    200m (10%)    0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-proxy-8xzhr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-pause-052658             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 79s                kube-proxy       
	  Normal   Starting                 18s                kube-proxy       
	  Warning  CgroupV1                 98s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  98s (x8 over 98s)  kubelet          Node pause-052658 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    98s (x8 over 98s)  kubelet          Node pause-052658 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     98s (x8 over 98s)  kubelet          Node pause-052658 status is now: NodeHasSufficientPID
	  Normal   Starting                 87s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 87s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  87s                kubelet          Node pause-052658 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    87s                kubelet          Node pause-052658 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     87s                kubelet          Node pause-052658 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           83s                node-controller  Node pause-052658 event: Registered Node pause-052658 in Controller
	  Normal   NodeReady                40s                kubelet          Node pause-052658 status is now: NodeReady
	  Warning  ContainerGCFailed        27s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           16s                node-controller  Node pause-052658 event: Registered Node pause-052658 in Controller
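
The conditions and events above are the kubectl describe rendering of the node object. A minimal client-go sketch that reads the same condition set programmatically, assuming a kubeconfig at the default location with access to this cluster:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-052658", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		// Type/Status/Reason mirror the Conditions table above.
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}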
	
	
	==> dmesg <==
	[Oct19 12:39] overlayfs: idmapped layers are currently not supported
	[Oct19 12:40] overlayfs: idmapped layers are currently not supported
	[  +3.779280] overlayfs: idmapped layers are currently not supported
	[Oct19 12:41] overlayfs: idmapped layers are currently not supported
	[Oct19 12:42] overlayfs: idmapped layers are currently not supported
	[Oct19 12:43] overlayfs: idmapped layers are currently not supported
	[  +3.355153] overlayfs: idmapped layers are currently not supported
	[Oct19 12:44] overlayfs: idmapped layers are currently not supported
	[ +21.526979] overlayfs: idmapped layers are currently not supported
	[Oct19 12:46] overlayfs: idmapped layers are currently not supported
	[Oct19 12:50] overlayfs: idmapped layers are currently not supported
	[Oct19 12:51] overlayfs: idmapped layers are currently not supported
	[Oct19 12:52] overlayfs: idmapped layers are currently not supported
	[Oct19 12:53] overlayfs: idmapped layers are currently not supported
	[Oct19 12:54] overlayfs: idmapped layers are currently not supported
	[Oct19 12:56] overlayfs: idmapped layers are currently not supported
	[ +16.315179] overlayfs: idmapped layers are currently not supported
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d676f6db0dd2dacfd3bf4b36c2ba236c4e1cae0c8626d009575ea36888e03436] <==
	{"level":"warn","ts":"2025-10-19T13:04:09.254612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:04:09.294886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:04:09.367455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:04:09.418032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:04:09.470261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:04:09.505761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:04:09.732780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48986","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T13:05:08.699382Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T13:05:08.699435Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-052658","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-19T13:05:08.699543Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T13:05:08.699600Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"info","ts":"2025-10-19T13:05:08.867141Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-19T13:05:08.867271Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-19T13:05:08.867291Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-19T13:05:08.867077Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-19T13:05:08.867602Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T13:05:08.867635Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T13:05:08.867643Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-19T13:05:08.867688Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T13:05:08.867702Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T13:05:08.867709Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T13:05:08.870606Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-19T13:05:08.870680Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T13:05:08.870717Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-19T13:05:08.870830Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-052658","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [d7ffff287898431d46f269ae1eba7808cb5fa242b40b83b1d32861a66655d7a8] <==
	{"level":"warn","ts":"2025-10-19T13:05:21.480150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.509433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.587475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.632975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.670003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.727922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.751842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.774317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.786574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.847748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.918627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:21.954037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.032721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.035244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.054557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.093777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.131706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.170510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.184904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.198730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.282343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.302328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.352050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.360189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:05:22.535796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36662","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:05:42 up  2:48,  0 user,  load average: 4.12, 2.69, 2.38
	Linux pause-052658 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5d464678ea1d810867398d806ef9ecea0b7e7e536a9ccd4a7598f0cb18a5d5e8] <==
	I1019 13:04:21.914054       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:04:21.914948       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 13:04:21.915123       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:04:21.915235       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:04:21.915279       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:04:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:04:22.134519       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:04:22.134547       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:04:22.134555       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:04:22.135470       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 13:04:52.135277       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 13:04:52.135386       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 13:04:52.135459       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 13:04:52.135595       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1019 13:04:53.335171       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:04:53.335287       1 metrics.go:72] Registering metrics
	I1019 13:04:53.335368       1 controller.go:711] "Syncing nftables rules"
	I1019 13:05:02.134850       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:05:02.134906       1 main.go:301] handling current node
	
	
	==> kindnet [9036d93a9870e51a8553d29c237178734288ec8578cd01fe4a9d30733a29a989] <==
	I1019 13:05:17.763393       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:05:17.765062       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 13:05:17.765850       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:05:17.805743       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:05:17.805784       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:05:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:05:18.050640       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:05:18.050668       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:05:18.050679       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:05:18.057998       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 13:05:23.853768       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:05:23.853861       1 metrics.go:72] Registering metrics
	I1019 13:05:23.860956       1 controller.go:711] "Syncing nftables rules"
	I1019 13:05:28.051233       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:05:28.051281       1 main.go:301] handling current node
	I1019 13:05:38.050841       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:05:38.050881       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0e93a892e96f5ce20eb832477b72857cd295910746fafbd1f048bbf773aaaed1] <==
	W1019 13:05:08.736900       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.736954       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737004       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.736758       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737011       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737132       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737187       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737223       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737288       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737340       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737378       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737435       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737490       1 logging.go:55] [core] [Channel #8 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737532       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737589       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737641       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737672       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.736612       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737497       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737347       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.736150       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.736873       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737193       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737804       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1019 13:05:08.737840       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8bdbd9430d1867563c01e9db16c16d8bfc47dfbd4064de68b62e3c608fc7b2e8] <==
	I1019 13:05:23.730439       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 13:05:23.730455       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 13:05:23.734378       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 13:05:23.739198       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 13:05:23.739310       1 policy_source.go:240] refreshing policies
	I1019 13:05:23.748031       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 13:05:23.748265       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 13:05:23.749662       1 aggregator.go:171] initial CRD sync complete...
	I1019 13:05:23.751022       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 13:05:23.751099       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 13:05:23.751131       1 cache.go:39] Caches are synced for autoregister controller
	I1019 13:05:23.751385       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 13:05:23.751450       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 13:05:23.775546       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 13:05:23.788594       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 13:05:23.789924       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:05:23.794215       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:05:23.825276       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1019 13:05:23.894385       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 13:05:24.227571       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:05:24.777503       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:05:26.173579       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:05:26.271842       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 13:05:26.323967       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 13:05:26.424521       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [2f49b3722734ec5fa7cb1b7440bec821f2cfc59804041aba24306e9dcc504795] <==
	I1019 13:04:19.091710       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 13:04:19.091741       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 13:04:19.091767       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 13:04:19.103865       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:04:19.103966       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 13:04:19.103996       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 13:04:19.108653       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-052658" podCIDRs=["10.244.0.0/24"]
	I1019 13:04:19.109896       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 13:04:19.110560       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 13:04:19.120124       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 13:04:19.121358       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 13:04:19.121467       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 13:04:19.121517       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 13:04:19.121780       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 13:04:19.122526       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 13:04:19.123098       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 13:04:19.123258       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 13:04:19.123159       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 13:04:19.125087       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 13:04:19.125178       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 13:04:19.126454       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:04:19.128499       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 13:04:19.135077       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:04:19.135187       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:05:04.077634       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [ba8f62ba490de8933a05b5c6dbf528ace592eba1da16695b3e24170c833da729] <==
	I1019 13:05:26.034034       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 13:05:26.040164       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 13:05:26.049172       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 13:05:26.049310       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 13:05:26.049373       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 13:05:26.049405       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 13:05:26.049432       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 13:05:26.049535       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 13:05:26.054055       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 13:05:26.057349       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:05:26.057756       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 13:05:26.063866       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 13:05:26.065434       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:05:26.065702       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 13:05:26.071108       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:05:26.071196       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 13:05:26.071228       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 13:05:26.076957       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 13:05:26.076969       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 13:05:26.076988       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 13:05:26.082587       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 13:05:26.086893       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 13:05:26.093171       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 13:05:26.101472       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 13:05:26.104866       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-proxy [962513f0b7d745c9f24d3922de11904ff5dac0b2b94327d9b2481cfa5d29c246] <==
	I1019 13:05:21.478065       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:05:22.502210       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:05:23.903537       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:05:23.903654       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 13:05:23.903802       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:05:23.978859       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:05:23.978987       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:05:23.993455       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:05:23.994063       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:05:23.994291       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:05:24.001120       1 config.go:200] "Starting service config controller"
	I1019 13:05:24.001233       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:05:24.001277       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:05:24.001321       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:05:24.001377       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:05:24.001415       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:05:24.007469       1 config.go:309] "Starting node config controller"
	I1019 13:05:24.007562       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:05:24.007595       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:05:24.101376       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 13:05:24.101457       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:05:24.101469       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [fa5349ebdab5aa344012950f607a1526ac8a79065f14d86c23329d96790f97a2] <==
	I1019 13:04:22.409866       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:04:22.499188       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:04:22.599996       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:04:22.600042       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 13:04:22.600126       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:04:22.626326       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:04:22.626445       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:04:22.634658       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:04:22.635030       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:04:22.635239       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:04:22.636491       1 config.go:200] "Starting service config controller"
	I1019 13:04:22.636563       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:04:22.636606       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:04:22.636634       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:04:22.636667       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:04:22.636695       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:04:22.637332       1 config.go:309] "Starting node config controller"
	I1019 13:04:22.644328       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:04:22.644408       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:04:22.737548       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:04:22.737650       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 13:04:22.737688       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bb49a02b287e654e3bf830c5ec876e1c796bfe354b6a4345250db63f8963a09b] <==
	I1019 13:04:09.072429       1 serving.go:386] Generated self-signed cert in-memory
	W1019 13:04:13.490400       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 13:04:13.490430       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 13:04:13.490441       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 13:04:13.490448       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 13:04:13.552754       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 13:04:13.556375       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:04:13.564696       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:04:13.564807       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:04:13.564832       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:04:13.564859       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 13:04:13.592860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1019 13:04:14.565092       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:05:08.717019       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 13:05:08.717047       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 13:05:08.717082       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 13:05:08.717109       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:05:08.717396       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 13:05:08.717435       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c743f2b2cc5739d6f671d60d15ea27e27dfa0dc935153abf39b1ade383be12c8] <==
	I1019 13:05:23.543072       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:05:23.545622       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:05:23.555064       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 13:05:23.555147       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:05:23.567152       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1019 13:05:23.668816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 13:05:23.669278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 13:05:23.669351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 13:05:23.669435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 13:05:23.669496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 13:05:23.669551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 13:05:23.669607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 13:05:23.669661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 13:05:23.670673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 13:05:23.670928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 13:05:23.671916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 13:05:23.672554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 13:05:23.672698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 13:05:23.676172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 13:05:23.676212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 13:05:23.676241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 13:05:23.676401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 13:05:23.676514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 13:05:23.694278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1019 13:05:25.269822       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.521989    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-58smf\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a0499250-d06a-41f4-9f84-bc7972eb976b" pod="kube-system/kindnet-58smf"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.522176    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-9fkgs\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bc66b89c-607e-43bb-bf8d-cd5963f3e7df" pod="kube-system/coredns-66bc5c9577-9fkgs"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: I1019 13:05:17.522956    1311 scope.go:117] "RemoveContainer" containerID="bb49a02b287e654e3bf830c5ec876e1c796bfe354b6a4345250db63f8963a09b"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.523560    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b8849b9e91fc6b3da9ce2ba93ecc23ce" pod="kube-system/kube-controller-manager-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.523785    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7cad4830782aab0ed630ed2b840cc95c" pod="kube-system/kube-scheduler-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.524025    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5df87c02bdd2a663aba9a0886d071fc3" pod="kube-system/etcd-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.524263    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="227aee2d66fe89c7a2e9965aa151eb74" pod="kube-system/kube-apiserver-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.524485    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-58smf\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a0499250-d06a-41f4-9f84-bc7972eb976b" pod="kube-system/kindnet-58smf"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.524696    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-9fkgs\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bc66b89c-607e-43bb-bf8d-cd5963f3e7df" pod="kube-system/coredns-66bc5c9577-9fkgs"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: I1019 13:05:17.557933    1311 scope.go:117] "RemoveContainer" containerID="fa5349ebdab5aa344012950f607a1526ac8a79065f14d86c23329d96790f97a2"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.558298    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xzhr\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="02e96e3a-3380-49f9-b471-ec534e19fe43" pod="kube-system/kube-proxy-8xzhr"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.558628    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-58smf\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="a0499250-d06a-41f4-9f84-bc7972eb976b" pod="kube-system/kindnet-58smf"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.558873    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-9fkgs\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="bc66b89c-607e-43bb-bf8d-cd5963f3e7df" pod="kube-system/coredns-66bc5c9577-9fkgs"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.559080    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b8849b9e91fc6b3da9ce2ba93ecc23ce" pod="kube-system/kube-controller-manager-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.559306    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="7cad4830782aab0ed630ed2b840cc95c" pod="kube-system/kube-scheduler-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.559503    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5df87c02bdd2a663aba9a0886d071fc3" pod="kube-system/etcd-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.559702    1311 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-052658\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="227aee2d66fe89c7a2e9965aa151eb74" pod="kube-system/kube-apiserver-pause-052658"
	Oct 19 13:05:17 pause-052658 kubelet[1311]: E1019 13:05:17.772884    1311 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{etcd-pause-052658.186fe6385f7597b2  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-pause-052658,UID:5df87c02bdd2a663aba9a0886d071fc3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://127.0.0.1:2381/readyz\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:pause-052658,},FirstTimestamp:2025-10-19 13:05:09.119252402 +0000 UTC m=+54.288485472,LastTimestamp:2025-10-19 13:05:09.119252402 +0000 UTC m=+54.288485472,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-052658,}"
	Oct 19 13:05:23 pause-052658 kubelet[1311]: E1019 13:05:23.502342    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-052658\" is forbidden: User \"system:node:pause-052658\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-052658' and this object" podUID="b8849b9e91fc6b3da9ce2ba93ecc23ce" pod="kube-system/kube-controller-manager-pause-052658"
	Oct 19 13:05:23 pause-052658 kubelet[1311]: E1019 13:05:23.502695    1311 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-052658\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-052658' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 19 13:05:23 pause-052658 kubelet[1311]: E1019 13:05:23.668483    1311 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-052658\" is forbidden: User \"system:node:pause-052658\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-052658' and this object" podUID="7cad4830782aab0ed630ed2b840cc95c" pod="kube-system/kube-scheduler-pause-052658"
	Oct 19 13:05:35 pause-052658 kubelet[1311]: W1019 13:05:35.514763    1311 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 19 13:05:35 pause-052658 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 13:05:35 pause-052658 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 13:05:35 pause-052658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
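Both kube-proxy instances in the logs above flag "Kube-proxy configuration may be incomplete or incorrect" because nodePortAddresses is unset. A minimal sketch of applying the fix the message itself suggests, assuming the standard kubeadm layout minikube uses (kube-proxy reads its KubeProxyConfiguration from the kube-proxy ConfigMap, key config.conf); illustrative only, not part of the test run:

	# set nodePortAddresses: ["primary"] under config.conf, then restart the daemonset
	kubectl --context pause-052658 -n kube-system edit configmap kube-proxy
	kubectl --context pause-052658 -n kube-system rollout restart daemonset kube-proxy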
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-052658 -n pause-052658
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-052658 -n pause-052658: exit status 2 (477.571941ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-052658 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.44s)
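The kubelet log above ends with systemd stopping kubelet.service, consistent with the pause flow stopping the kubelet before freezing containers, while the status probe above still reported the apiserver as Running. A minimal sketch for retrying the failing step by hand against the same profile, reusing only commands that appear elsewhere in this report:

	out/minikube-linux-arm64 pause -p pause-052658 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-052658 -n pause-052658
	out/minikube-linux-arm64 unpause -p pause-052658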

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-842494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-842494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (327.974439ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:13:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-842494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-842494 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-842494 describe deploy/metrics-server -n kube-system: exit status 1 (108.595756ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-842494 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
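The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check, which shells out to the exact "sudo runc list -f json" command shown in the stderr. A minimal sketch for reproducing the check by hand: the /run/runc listing mirrors the path in the error message, and "sudo crio config" (also exercised in the audit log below) is an assumed way to inspect which runtime root crio is actually using:

	out/minikube-linux-arm64 ssh -p old-k8s-version-842494 -- sudo runc list -f json
	out/minikube-linux-arm64 ssh -p old-k8s-version-842494 -- sudo ls /run/runc
	out/minikube-linux-arm64 ssh -p old-k8s-version-842494 -- sudo crio config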
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-842494
helpers_test.go:243: (dbg) docker inspect old-k8s-version-842494:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd",
	        "Created": "2025-10-19T13:12:36.220963555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 472068,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:12:36.282486802Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/hosts",
	        "LogPath": "/var/lib/docker/containers/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd-json.log",
	        "Name": "/old-k8s-version-842494",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-842494:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-842494",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd",
	                "LowerDir": "/var/lib/docker/overlay2/651a449c5b4e1673387a386a93fce51fb6365b65408215e08e645eaad452a977-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/651a449c5b4e1673387a386a93fce51fb6365b65408215e08e645eaad452a977/merged",
	                "UpperDir": "/var/lib/docker/overlay2/651a449c5b4e1673387a386a93fce51fb6365b65408215e08e645eaad452a977/diff",
	                "WorkDir": "/var/lib/docker/overlay2/651a449c5b4e1673387a386a93fce51fb6365b65408215e08e645eaad452a977/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-842494",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-842494/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-842494",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-842494",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-842494",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c2cbe574d243bf12735f7ab0ce81147eb4b02c9693a2ee0555f18221776ba2a6",
	            "SandboxKey": "/var/run/docker/netns/c2cbe574d243",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-842494": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:44:88:39:84:39",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b37065579aa71db7f1dc53707ff2b821c589305580e1e4d9a2a0c035d310ed82",
	                    "EndpointID": "89f266052eae397a0c3e50c372fb880e4e89b3946092c00ec7f15275325ef4af",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-842494",
	                        "143af978a0b4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
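The inspect dump above is large; a compact, illustrative way to pull out just the two fields this post-mortem keys on (container state and the host port mapped to the apiserver's 8443/tcp) with standard docker templating:

	docker inspect -f '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-842494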
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-842494 -n old-k8s-version-842494
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-842494 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-842494 logs -n 25: (1.824355634s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-696007 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │                     │
	│ ssh     │ -p cilium-696007 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │                     │
	│ ssh     │ -p cilium-696007 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │                     │
	│ ssh     │ -p cilium-696007 sudo containerd config dump                                                                                                                                                                                                  │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │                     │
	│ ssh     │ -p cilium-696007 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │                     │
	│ ssh     │ -p cilium-696007 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │                     │
	│ ssh     │ -p cilium-696007 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │                     │
	│ ssh     │ -p cilium-696007 sudo crio config                                                                                                                                                                                                             │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │                     │
	│ delete  │ -p cilium-696007                                                                                                                                                                                                                              │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │ 19 Oct 25 13:09 UTC │
	│ start   │ -p cert-expiration-088393 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │ 19 Oct 25 13:10 UTC │
	│ start   │ -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-104724 │ jenkins │ v1.37.0 │ 19 Oct 25 13:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-104724 │ jenkins │ v1.37.0 │ 19 Oct 25 13:10 UTC │ 19 Oct 25 13:11 UTC │
	│ delete  │ -p kubernetes-upgrade-104724                                                                                                                                                                                                                  │ kubernetes-upgrade-104724 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ start   │ -p force-systemd-flag-606072 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ force-systemd-flag-606072 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ delete  │ -p force-systemd-flag-606072                                                                                                                                                                                                                  │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ start   │ -p cert-options-264135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:12 UTC │
	│ ssh     │ cert-options-264135 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ ssh     │ -p cert-options-264135 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ delete  │ -p cert-options-264135                                                                                                                                                                                                                        │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p cert-expiration-088393 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ delete  │ -p cert-expiration-088393                                                                                                                                                                                                                     │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-842494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:13:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:13:41.784823  475820 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:13:41.784944  475820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:13:41.784954  475820 out.go:374] Setting ErrFile to fd 2...
	I1019 13:13:41.784960  475820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:13:41.785195  475820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:13:41.785611  475820 out.go:368] Setting JSON to false
	I1019 13:13:41.786553  475820 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10572,"bootTime":1760869050,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:13:41.786623  475820 start.go:141] virtualization:  
	I1019 13:13:41.790618  475820 out.go:179] * [no-preload-108149] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:13:41.794110  475820 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:13:41.794159  475820 notify.go:220] Checking for updates...
	I1019 13:13:41.798186  475820 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:13:41.801384  475820 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:13:41.804495  475820 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:13:41.807721  475820 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:13:41.810818  475820 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:13:41.814493  475820 config.go:182] Loaded profile config "old-k8s-version-842494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 13:13:41.814623  475820 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:13:41.846400  475820 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:13:41.846525  475820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:13:41.909314  475820 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:13:41.899500277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:13:41.909420  475820 docker.go:318] overlay module found
	I1019 13:13:41.912734  475820 out.go:179] * Using the docker driver based on user configuration
	I1019 13:13:41.915737  475820 start.go:305] selected driver: docker
	I1019 13:13:41.915760  475820 start.go:925] validating driver "docker" against <nil>
	I1019 13:13:41.915775  475820 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:13:41.916547  475820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:13:41.982347  475820 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:13:41.972103169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:13:41.982515  475820 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 13:13:41.982756  475820 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:13:41.985796  475820 out.go:179] * Using Docker driver with root privileges
	I1019 13:13:41.988728  475820 cni.go:84] Creating CNI manager for ""
	I1019 13:13:41.988805  475820 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:13:41.988819  475820 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 13:13:41.988898  475820 start.go:349] cluster config:
	{Name:no-preload-108149 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-108149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
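
For reference, a cluster config like the one above corresponds to a start invocation roughly as follows; the exact command line is not echoed in this log, so the flags are reconstructed from the config values and should be read as a sketch:

    out/minikube-linux-arm64 start -p no-preload-108149 \
      --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.34.1 --memory=3072 --cpus=2 \
      --preload=false   # the "no-preload" profile skips the preloaded image tarball
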
	I1019 13:13:41.992104  475820 out.go:179] * Starting "no-preload-108149" primary control-plane node in "no-preload-108149" cluster
	I1019 13:13:41.994984  475820 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:13:41.997908  475820 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:13:42.003104  475820 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:13:42.003288  475820 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/config.json ...
	I1019 13:13:42.003347  475820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/config.json: {Name:mkeede23b6b2f977d01b5c25935b1df175a0bcb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:13:42.003763  475820 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:13:42.008426  475820 cache.go:107] acquiring lock: {Name:mk5a8d8c97028719cbe957e1da9da945a08129b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:13:42.008561  475820 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1019 13:13:42.008573  475820 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 166.164µs
	I1019 13:13:42.008590  475820 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1019 13:13:42.008602  475820 cache.go:107] acquiring lock: {Name:mka319b8201ff42f7c4d5a909d9f20912ffd3c71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:13:42.008720  475820 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1019 13:13:42.008926  475820 cache.go:107] acquiring lock: {Name:mk88bf9cd976728e53957a14cba132c54a305706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:13:42.009001  475820 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 13:13:42.009101  475820 cache.go:107] acquiring lock: {Name:mkeff018c276f2dc7628871eceb8ffdfd4f5d5dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:13:42.009181  475820 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1019 13:13:42.009269  475820 cache.go:107] acquiring lock: {Name:mka6ff496b257d0157aa179323c77a165d878290 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:13:42.009333  475820 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1019 13:13:42.009563  475820 cache.go:107] acquiring lock: {Name:mk900cbbfae137b259a6d045a5e954905ebc4ab7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:13:42.009648  475820 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1019 13:13:42.009770  475820 cache.go:107] acquiring lock: {Name:mk8151f8aaf53ecf9ac26af60dbb866094ee01c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:13:42.009842  475820 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1019 13:13:42.009945  475820 cache.go:107] acquiring lock: {Name:mk588b1a76127636b20b5749ab1b86e294b230e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:13:42.010016  475820 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1019 13:13:42.016847  475820 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1019 13:13:42.017269  475820 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1019 13:13:42.017382  475820 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1019 13:13:42.017465  475820 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1019 13:13:42.017544  475820 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1019 13:13:42.017623  475820 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1019 13:13:42.017756  475820 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
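
None of the seven control-plane images are present in the local Docker daemon, so minikube falls back to pulling each one and saving it as a tarball under .minikube/cache/images/arm64/ (the cache.go lines that follow). The cache can also be seeded ahead of time; a sketch, assuming the default cache location under the home directory:

    out/minikube-linux-arm64 cache add registry.k8s.io/pause:3.10.1
    ls ~/.minikube/cache/images/arm64/registry.k8s.io/   # one tarball per cached image
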
	I1019 13:13:42.039316  475820 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:13:42.039341  475820 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:13:42.039360  475820 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:13:42.039385  475820 start.go:360] acquireMachinesLock for no-preload-108149: {Name:mk1e7d61a5a88a341b3d8e7634b6c23c2df5dac5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:13:42.039511  475820 start.go:364] duration metric: took 104.19µs to acquireMachinesLock for "no-preload-108149"
	I1019 13:13:42.039553  475820 start.go:93] Provisioning new machine with config: &{Name:no-preload-108149 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-108149 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:13:42.039631  475820 start.go:125] createHost starting for "" (driver="docker")
	I1019 13:13:42.043166  475820 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 13:13:42.043471  475820 start.go:159] libmachine.API.Create for "no-preload-108149" (driver="docker")
	I1019 13:13:42.043521  475820 client.go:168] LocalClient.Create starting
	I1019 13:13:42.043595  475820 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem
	I1019 13:13:42.043634  475820 main.go:141] libmachine: Decoding PEM data...
	I1019 13:13:42.043653  475820 main.go:141] libmachine: Parsing certificate...
	I1019 13:13:42.043716  475820 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem
	I1019 13:13:42.043744  475820 main.go:141] libmachine: Decoding PEM data...
	I1019 13:13:42.043755  475820 main.go:141] libmachine: Parsing certificate...
	I1019 13:13:42.044158  475820 cli_runner.go:164] Run: docker network inspect no-preload-108149 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 13:13:42.067052  475820 cli_runner.go:211] docker network inspect no-preload-108149 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 13:13:42.067139  475820 network_create.go:284] running [docker network inspect no-preload-108149] to gather additional debugging logs...
	I1019 13:13:42.067163  475820 cli_runner.go:164] Run: docker network inspect no-preload-108149
	W1019 13:13:42.088388  475820 cli_runner.go:211] docker network inspect no-preload-108149 returned with exit code 1
	I1019 13:13:42.088422  475820 network_create.go:287] error running [docker network inspect no-preload-108149]: docker network inspect no-preload-108149: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-108149 not found
	I1019 13:13:42.088438  475820 network_create.go:289] output of [docker network inspect no-preload-108149]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-108149 not found
	
	** /stderr **
	I1019 13:13:42.088544  475820 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:13:42.110292  475820 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-319c97358c5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2a:99:c3:44:12:51} reservation:<nil>}
	I1019 13:13:42.110684  475820 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5c09b33e0936 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:93:4b:f6:fd:1c} reservation:<nil>}
	I1019 13:13:42.111126  475820 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c2bbaadd4a8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:8f:96:27:48:2c} reservation:<nil>}
	I1019 13:13:42.111773  475820 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ba9b10}
	I1019 13:13:42.111806  475820 network_create.go:124] attempt to create docker network no-preload-108149 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1019 13:13:42.111871  475820 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-108149 no-preload-108149
	I1019 13:13:42.201412  475820 network_create.go:108] docker network no-preload-108149 192.168.76.0/24 created
	I1019 13:13:42.201452  475820 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-108149" container
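
The subnet scan above steps through the private 192.168.x.0/24 ranges (49, 58, 67, ...) until it finds one not already claimed by a bridge, settling on 192.168.76.0/24 here. The result can be checked against the daemon directly; a sketch:

    docker network inspect no-preload-108149 \
      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
    # expected: 192.168.76.0/24 via 192.168.76.1
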
	I1019 13:13:42.201550  475820 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 13:13:42.230264  475820 cli_runner.go:164] Run: docker volume create no-preload-108149 --label name.minikube.sigs.k8s.io=no-preload-108149 --label created_by.minikube.sigs.k8s.io=true
	I1019 13:13:42.253562  475820 oci.go:103] Successfully created a docker volume no-preload-108149
	I1019 13:13:42.253798  475820 cli_runner.go:164] Run: docker run --rm --name no-preload-108149-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-108149 --entrypoint /usr/bin/test -v no-preload-108149:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 13:13:42.338195  475820 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1019 13:13:42.359826  475820 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1019 13:13:42.380526  475820 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1019 13:13:42.396547  475820 cache.go:157] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1019 13:13:42.396616  475820 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 387.064008ms
	I1019 13:13:42.396643  475820 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1019 13:13:42.401593  475820 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1019 13:13:42.404894  475820 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1019 13:13:42.413770  475820 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1019 13:13:42.415174  475820 cache.go:162] opening:  /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1019 13:13:42.731650  475820 cache.go:157] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1019 13:13:42.731683  475820 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 722.414706ms
	I1019 13:13:42.731703  475820 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1019 13:13:42.879647  475820 oci.go:107] Successfully prepared a docker volume no-preload-108149
	I1019 13:13:42.879681  475820 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1019 13:13:42.879805  475820 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 13:13:42.879912  475820 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 13:13:42.948716  475820 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-108149 --name no-preload-108149 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-108149 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-108149 --network no-preload-108149 --ip 192.168.76.2 --volume no-preload-108149:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
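
The node container publishes SSH (22), the Docker socket port (2376), a registry port (5000), and the API server ports (8443, 32443) on loopback-bound ephemeral host ports. The host port backing 22/tcp is what the SSH provisioning below dials (33423 in this run); a sketch for looking it up:

    docker port no-preload-108149 22/tcp
    # e.g. 127.0.0.1:33423
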
	I1019 13:13:43.312480  475820 cache.go:157] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1019 13:13:43.312518  475820 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.303417862s
	I1019 13:13:43.312532  475820 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1019 13:13:43.368862  475820 cli_runner.go:164] Run: docker container inspect no-preload-108149 --format={{.State.Running}}
	I1019 13:13:43.424100  475820 cache.go:157] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1019 13:13:43.424185  475820 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.415256827s
	I1019 13:13:43.424214  475820 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1019 13:13:43.429888  475820 cache.go:157] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1019 13:13:43.430523  475820 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.420572426s
	I1019 13:13:43.430571  475820 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1019 13:13:43.435523  475820 cli_runner.go:164] Run: docker container inspect no-preload-108149 --format={{.State.Status}}
	I1019 13:13:43.459354  475820 cli_runner.go:164] Run: docker exec no-preload-108149 stat /var/lib/dpkg/alternatives/iptables
	I1019 13:13:43.541399  475820 cache.go:157] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1019 13:13:43.541439  475820 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.53283653s
	I1019 13:13:43.541452  475820 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1019 13:13:43.550963  475820 oci.go:144] the created container "no-preload-108149" has a running status.
	I1019 13:13:43.550988  475820 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa...
	I1019 13:13:44.794808  475820 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 13:13:44.817498  475820 cli_runner.go:164] Run: docker container inspect no-preload-108149 --format={{.State.Status}}
	I1019 13:13:44.849327  475820 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 13:13:44.849357  475820 kic_runner.go:114] Args: [docker exec --privileged no-preload-108149 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 13:13:44.912821  475820 cli_runner.go:164] Run: docker container inspect no-preload-108149 --format={{.State.Status}}
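
With the generated key installed in the node's authorized_keys, the machine is also reachable by hand; a sketch using the key path and mapped host port (33423) from this run:

    ssh -i /home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa \
        -p 33423 docker@127.0.0.1 hostname
    # or, equivalently: out/minikube-linux-arm64 -p no-preload-108149 ssh
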
	I1019 13:13:44.931121  475820 machine.go:93] provisionDockerMachine start ...
	I1019 13:13:44.931234  475820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:13:44.962145  475820 main.go:141] libmachine: Using SSH client type: native
	I1019 13:13:44.962473  475820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1019 13:13:44.962491  475820 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:13:44.996036  475820 cache.go:157] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1019 13:13:44.996122  475820 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.986353978s
	I1019 13:13:44.996150  475820 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1019 13:13:44.996197  475820 cache.go:87] Successfully saved all images to host disk.
	I1019 13:13:45.179465  475820 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-108149
	
	I1019 13:13:45.179504  475820 ubuntu.go:182] provisioning hostname "no-preload-108149"
	I1019 13:13:45.179587  475820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:13:45.204094  475820 main.go:141] libmachine: Using SSH client type: native
	I1019 13:13:45.204468  475820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1019 13:13:45.204491  475820 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-108149 && echo "no-preload-108149" | sudo tee /etc/hostname
	I1019 13:13:45.392461  475820 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-108149
	
	I1019 13:13:45.392541  475820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:13:45.415309  475820 main.go:141] libmachine: Using SSH client type: native
	I1019 13:13:45.415704  475820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1019 13:13:45.415742  475820 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-108149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-108149/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-108149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:13:45.562025  475820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 13:13:45.562055  475820 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:13:45.562085  475820 ubuntu.go:190] setting up certificates
	I1019 13:13:45.562095  475820 provision.go:84] configureAuth start
	I1019 13:13:45.562154  475820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-108149
	I1019 13:13:45.579867  475820 provision.go:143] copyHostCerts
	I1019 13:13:45.579944  475820 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:13:45.579962  475820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:13:45.580044  475820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:13:45.580136  475820 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:13:45.580144  475820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:13:45.580190  475820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:13:45.580294  475820 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:13:45.580307  475820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:13:45.580334  475820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:13:45.580382  475820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.no-preload-108149 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-108149]
	I1019 13:13:46.107464  475820 provision.go:177] copyRemoteCerts
	I1019 13:13:46.107535  475820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:13:46.107576  475820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:13:46.127114  475820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:13:46.233454  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:13:46.251919  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 13:13:46.269605  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 13:13:46.287637  475820 provision.go:87] duration metric: took 725.527711ms to configureAuth
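
configureAuth issues a server certificate whose SANs cover every name the API server may be dialed on (127.0.0.1, the static node IP 192.168.76.2, localhost, minikube, and the profile name, per the san=[...] line above). One way to confirm the SANs on the generated cert; a sketch, with the path shortened to ~/.minikube and assuming OpenSSL 1.1.1+ for the -ext option:

    openssl x509 -in ~/.minikube/machines/server.pem -noout -ext subjectAltName
    # DNS:localhost, DNS:minikube, DNS:no-preload-108149, IP:127.0.0.1, IP:192.168.76.2
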
	I1019 13:13:46.287707  475820 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:13:46.287921  475820 config.go:182] Loaded profile config "no-preload-108149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:13:46.288031  475820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:13:46.305916  475820 main.go:141] libmachine: Using SSH client type: native
	I1019 13:13:46.306253  475820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1019 13:13:46.306275  475820 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:13:46.568546  475820 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
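
The sysconfig drop-in above lands in /etc/sysconfig/crio.minikube and takes effect via the crio restart issued in the same SSH command. Assuming the kicbase crio unit sources the file through an EnvironmentFile= line (not shown in this log), the wiring can be checked with:

    out/minikube-linux-arm64 -p no-preload-108149 ssh -- systemctl cat crio | grep -i environmentfile
    # EnvironmentFile=-/etc/sysconfig/crio.minikube   (assumed unit wiring)
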
	
	I1019 13:13:46.568569  475820 machine.go:96] duration metric: took 1.637424066s to provisionDockerMachine
	I1019 13:13:46.568578  475820 client.go:171] duration metric: took 4.525048332s to LocalClient.Create
	I1019 13:13:46.568593  475820 start.go:167] duration metric: took 4.525123041s to libmachine.API.Create "no-preload-108149"
	I1019 13:13:46.568599  475820 start.go:293] postStartSetup for "no-preload-108149" (driver="docker")
	I1019 13:13:46.568610  475820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:13:46.568694  475820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:13:46.568735  475820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:13:46.587225  475820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:13:46.694302  475820 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:13:46.697635  475820 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:13:46.697665  475820 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:13:46.697705  475820 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:13:46.697764  475820 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:13:46.697856  475820 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:13:46.697961  475820 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:13:46.705442  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:13:46.723982  475820 start.go:296] duration metric: took 155.368044ms for postStartSetup
	I1019 13:13:46.724417  475820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-108149
	I1019 13:13:46.742093  475820 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/config.json ...
	I1019 13:13:46.742776  475820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:13:46.742846  475820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:13:46.765577  475820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:13:46.866624  475820 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:13:46.877346  475820 start.go:128] duration metric: took 4.837699818s to createHost
	I1019 13:13:46.877368  475820 start.go:83] releasing machines lock for "no-preload-108149", held for 4.837842334s
	I1019 13:13:46.877452  475820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-108149
	I1019 13:13:46.894797  475820 ssh_runner.go:195] Run: cat /version.json
	I1019 13:13:46.894852  475820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:13:46.894873  475820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:13:46.894935  475820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:13:46.920214  475820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:13:46.921091  475820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:13:47.021596  475820 ssh_runner.go:195] Run: systemctl --version
	I1019 13:13:47.121829  475820 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:13:47.159339  475820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:13:47.163647  475820 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:13:47.163719  475820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:13:47.192330  475820 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
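
Renaming the stock bridge/podman CNI configs with a .mk_disabled suffix keeps them from colliding with the kindnet config installed later. Inside the node the effect looks like this; a sketch:

    out/minikube-linux-arm64 -p no-preload-108149 ssh -- sudo ls /etc/cni/net.d
    # 10-crio-bridge.conflist.disabled.mk_disabled  87-podman-bridge.conflist.mk_disabled
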
	I1019 13:13:47.192350  475820 start.go:495] detecting cgroup driver to use...
	I1019 13:13:47.192384  475820 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:13:47.192433  475820 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:13:47.211479  475820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:13:47.224583  475820 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:13:47.224646  475820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:13:47.242364  475820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:13:47.261567  475820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:13:47.381316  475820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:13:47.499587  475820 docker.go:234] disabling docker service ...
	I1019 13:13:47.499655  475820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:13:47.526482  475820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:13:47.540705  475820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:13:47.703137  475820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:13:47.891027  475820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:13:47.906543  475820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:13:47.926609  475820 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 13:13:47.926683  475820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:13:47.939064  475820 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:13:47.939141  475820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:13:47.950369  475820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:13:47.963920  475820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:13:47.974473  475820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:13:47.983502  475820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:13:47.994062  475820 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:13:48.012921  475820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:13:48.023310  475820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:13:48.032734  475820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:13:48.042461  475820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:13:48.189152  475820 ssh_runner.go:195] Run: sudo systemctl restart crio
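
After the sed edits and restart above, /etc/crio/crio.conf.d/02-crio.conf carries the pause image, cgroup manager, and unprivileged-port sysctl for this profile. A sketch of the resulting keys (TOML section placement assumed):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
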
	I1019 13:13:48.365186  475820 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:13:48.365245  475820 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:13:48.370157  475820 start.go:563] Will wait 60s for crictl version
	I1019 13:13:48.370224  475820 ssh_runner.go:195] Run: which crictl
	I1019 13:13:48.375209  475820 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:13:48.407490  475820 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 13:13:48.407601  475820 ssh_runner.go:195] Run: crio --version
	I1019 13:13:48.448280  475820 ssh_runner.go:195] Run: crio --version
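
The version probe above goes through /usr/local/bin/crictl against the socket configured in /etc/crictl.yaml earlier. The same check by hand; a sketch:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    # RuntimeName: cri-o, RuntimeVersion: 1.34.1
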
	I1019 13:13:48.494480  475820 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 19 13:13:35 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:35.084062801Z" level=info msg="Created container 1bb17d0599924ca6aa323eff6c3834ac7f50fbadf8425f62097965e11adb956a: kube-system/coredns-5dd5756b68-5mdz7/coredns" id=94be2c67-81e1-4780-ac47-c60d9453ad1c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:13:35 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:35.087762082Z" level=info msg="Starting container: 1bb17d0599924ca6aa323eff6c3834ac7f50fbadf8425f62097965e11adb956a" id=56bbcbd0-693d-45dc-a56c-3756da2005d5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:13:35 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:35.090916791Z" level=info msg="Started container" PID=1941 containerID=1bb17d0599924ca6aa323eff6c3834ac7f50fbadf8425f62097965e11adb956a description=kube-system/coredns-5dd5756b68-5mdz7/coredns id=56bbcbd0-693d-45dc-a56c-3756da2005d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=621bc6cb6653739c6e02aaf9557ae74f640b0174cb56e0e9397ffd4e7275346a
	Oct 19 13:13:38 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:38.835822425Z" level=info msg="Running pod sandbox: default/busybox/POD" id=45848032-b2b6-40a6-8c5a-ff1c60e8f959 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:13:38 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:38.83590259Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:13:38 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:38.85037095Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:86ec3b26d1007bbe65c46d3ba7d1d7389286e4c670f53d4d682f1d4001330a3f UID:a8b3e381-a2c1-49ea-a27d-b299c312c182 NetNS:/var/run/netns/2d4bcc60-e57f-414b-9807-4a7c3d3d6350 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079880}] Aliases:map[]}"
	Oct 19 13:13:38 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:38.850407668Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 13:13:38 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:38.8653535Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:86ec3b26d1007bbe65c46d3ba7d1d7389286e4c670f53d4d682f1d4001330a3f UID:a8b3e381-a2c1-49ea-a27d-b299c312c182 NetNS:/var/run/netns/2d4bcc60-e57f-414b-9807-4a7c3d3d6350 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079880}] Aliases:map[]}"
	Oct 19 13:13:38 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:38.865494613Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 13:13:38 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:38.876932541Z" level=info msg="Ran pod sandbox 86ec3b26d1007bbe65c46d3ba7d1d7389286e4c670f53d4d682f1d4001330a3f with infra container: default/busybox/POD" id=45848032-b2b6-40a6-8c5a-ff1c60e8f959 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:13:38 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:38.882871344Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=86e212ae-e2b0-43fc-8f0e-def808846386 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:13:38 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:38.883220149Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=86e212ae-e2b0-43fc-8f0e-def808846386 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:13:38 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:38.885832598Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=86e212ae-e2b0-43fc-8f0e-def808846386 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:13:38 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:38.89267529Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4a20df97-05b8-4f18-98ad-9d2dc06158c5 name=/runtime.v1.ImageService/PullImage
	Oct 19 13:13:38 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:38.89615253Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 13:13:40 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:40.86306937Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=4a20df97-05b8-4f18-98ad-9d2dc06158c5 name=/runtime.v1.ImageService/PullImage
	Oct 19 13:13:40 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:40.864041264Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=923c02af-4278-417e-88c5-a2634c4647eb name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:13:40 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:40.865632751Z" level=info msg="Creating container: default/busybox/busybox" id=091f43f1-3a01-42ef-aaad-4c0fd7fc52f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:13:40 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:40.866327308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:13:40 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:40.871827648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:13:40 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:40.872282169Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:13:40 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:40.890714512Z" level=info msg="Created container ef250f1b281d62c207d390a0c8ab3f4631e93eb10c9c5091b54de14b57a3c5b5: default/busybox/busybox" id=091f43f1-3a01-42ef-aaad-4c0fd7fc52f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:13:40 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:40.899690959Z" level=info msg="Starting container: ef250f1b281d62c207d390a0c8ab3f4631e93eb10c9c5091b54de14b57a3c5b5" id=132e60f4-9927-425c-a892-d0e553623437 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:13:40 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:40.903702138Z" level=info msg="Started container" PID=1995 containerID=ef250f1b281d62c207d390a0c8ab3f4631e93eb10c9c5091b54de14b57a3c5b5 description=default/busybox/busybox id=132e60f4-9927-425c-a892-d0e553623437 name=/runtime.v1.RuntimeService/StartContainer sandboxID=86ec3b26d1007bbe65c46d3ba7d1d7389286e4c670f53d4d682f1d4001330a3f
	Oct 19 13:13:47 old-k8s-version-842494 crio[836]: time="2025-10-19T13:13:47.73784501Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	ef250f1b281d6       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   86ec3b26d1007       busybox                                          default
	1bb17d0599924       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      14 seconds ago      Running             coredns                   0                   621bc6cb66537       coredns-5dd5756b68-5mdz7                         kube-system
	4fadd59900d85       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      16 seconds ago      Running             storage-provisioner       0                   d5eb884e94aef       storage-provisioner                              kube-system
	b6a0af32cabaf       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    26 seconds ago      Running             kindnet-cni               0                   2866699916eab       kindnet-7lwtw                                    kube-system
	8c5be96e4cd6e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      30 seconds ago      Running             kube-proxy                0                   b164e2e73ebaf       kube-proxy-v7wq7                                 kube-system
	5cb36556d4d1a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      52 seconds ago      Running             etcd                      0                   9573dcf0c5d2c       etcd-old-k8s-version-842494                      kube-system
	e9b06d47d0066       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      52 seconds ago      Running             kube-scheduler            0                   73ec169d1b4a0       kube-scheduler-old-k8s-version-842494            kube-system
	73c8e599e7e8b       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      52 seconds ago      Running             kube-apiserver            0                   a89bf3bd18032       kube-apiserver-old-k8s-version-842494            kube-system
	025822be57d73       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      52 seconds ago      Running             kube-controller-manager   0                   20203c68cae21       kube-controller-manager-old-k8s-version-842494   kube-system
	
	
	==> coredns [1bb17d0599924ca6aa323eff6c3834ac7f50fbadf8425f62097965e11adb956a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48329 - 12263 "HINFO IN 9043734676361943334.5829023110743292382. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017271475s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-842494
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-842494
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=old-k8s-version-842494
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_13_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:13:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-842494
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:13:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:13:35 +0000   Sun, 19 Oct 2025 13:12:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:13:35 +0000   Sun, 19 Oct 2025 13:12:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:13:35 +0000   Sun, 19 Oct 2025 13:12:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:13:35 +0000   Sun, 19 Oct 2025 13:13:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-842494
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                ff91876e-8bed-4e46-9175-4f587101f24f
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-5mdz7                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-old-k8s-version-842494                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         44s
	  kube-system                 kindnet-7lwtw                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-old-k8s-version-842494             250m (12%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-old-k8s-version-842494    200m (10%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-v7wq7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-old-k8s-version-842494             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 30s                kube-proxy       
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)  kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)  kubelet          Node old-k8s-version-842494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)  kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientPID
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  44s                kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s                kubelet          Node old-k8s-version-842494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s                kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                node-controller  Node old-k8s-version-842494 event: Registered Node old-k8s-version-842494 in Controller
	  Normal  NodeReady                16s                kubelet          Node old-k8s-version-842494 status is now: NodeReady
	
	
	==> dmesg <==
	[ +21.526979] overlayfs: idmapped layers are currently not supported
	[Oct19 12:46] overlayfs: idmapped layers are currently not supported
	[Oct19 12:50] overlayfs: idmapped layers are currently not supported
	[Oct19 12:51] overlayfs: idmapped layers are currently not supported
	[Oct19 12:52] overlayfs: idmapped layers are currently not supported
	[Oct19 12:53] overlayfs: idmapped layers are currently not supported
	[Oct19 12:54] overlayfs: idmapped layers are currently not supported
	[Oct19 12:56] overlayfs: idmapped layers are currently not supported
	[ +16.315179] overlayfs: idmapped layers are currently not supported
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5cb36556d4d1a8cad00edc3a2ed9cf26c5bb85bbedd4a77673ed20d20e10ac90] <==
	{"level":"info","ts":"2025-10-19T13:12:57.160258Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T13:12:57.155199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-19T13:12:57.160563Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-19T13:12:57.15523Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T13:12:57.160722Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T13:12:57.161558Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-19T13:12:57.161749Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-19T13:12:57.813579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-19T13:12:57.813695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-19T13:12:57.813737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-19T13:12:57.813775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-19T13:12:57.81381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-19T13:12:57.813848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-19T13:12:57.813883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-19T13:12:57.816013Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T13:12:57.817454Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-842494 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T13:12:57.817526Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T13:12:57.819446Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T13:12:57.819567Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T13:12:57.819618Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T13:12:57.820148Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-19T13:12:57.827707Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T13:12:57.849727Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-19T13:12:57.849891Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T13:12:57.849926Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:13:50 up  2:56,  0 user,  load average: 4.63, 2.92, 2.57
	Linux old-k8s-version-842494 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b6a0af32cabafbbf494a5dc5f2f7468a0538173321034977d23a4992e30d996d] <==
	I1019 13:13:22.808431       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:13:22.809505       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 13:13:22.809636       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:13:22.809718       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:13:22.809738       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:13:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:13:23.008392       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:13:23.008478       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:13:23.008551       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:13:23.009485       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 13:13:23.208942       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:13:23.209063       1 metrics.go:72] Registering metrics
	I1019 13:13:23.209280       1 controller.go:711] "Syncing nftables rules"
	I1019 13:13:33.014676       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 13:13:33.014732       1 main.go:301] handling current node
	I1019 13:13:43.007988       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 13:13:43.008097       1 main.go:301] handling current node
	
	
	==> kube-apiserver [73c8e599e7e8be0505f72c8c4e3c27d694502c6f42a84805a71afac1ee0c3d0b] <==
	I1019 13:13:01.712268       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1019 13:13:01.712526       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1019 13:13:01.713158       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1019 13:13:01.713343       1 aggregator.go:166] initial CRD sync complete...
	I1019 13:13:01.713380       1 autoregister_controller.go:141] Starting autoregister controller
	I1019 13:13:01.713407       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 13:13:01.713437       1 cache.go:39] Caches are synced for autoregister controller
	I1019 13:13:01.715511       1 controller.go:624] quota admission added evaluator for: namespaces
	I1019 13:13:01.719268       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1019 13:13:01.765650       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:13:02.414428       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 13:13:02.421051       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 13:13:02.421075       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:13:02.993585       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:13:03.054290       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:13:03.151782       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 13:13:03.159664       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1019 13:13:03.160810       1 controller.go:624] quota admission added evaluator for: endpoints
	I1019 13:13:03.165965       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:13:03.651888       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1019 13:13:04.893863       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1019 13:13:04.907543       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 13:13:04.920599       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1019 13:13:18.382329       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1019 13:13:18.592216       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [025822be57d7311621d4c335c23f300c1114804c021fb477c5dfaea1e58f3716] <==
	I1019 13:13:17.829766       1 shared_informer.go:318] Caches are synced for resource quota
	I1019 13:13:17.829871       1 shared_informer.go:318] Caches are synced for resource quota
	I1019 13:13:17.923778       1 shared_informer.go:318] Caches are synced for attach detach
	I1019 13:13:18.257324       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 13:13:18.257357       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1019 13:13:18.258848       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 13:13:18.414181       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1019 13:13:18.659915       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-v7wq7"
	I1019 13:13:18.659941       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7lwtw"
	I1019 13:13:18.819360       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xfw5z"
	I1019 13:13:18.857556       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-5mdz7"
	I1019 13:13:18.884936       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="471.847933ms"
	I1019 13:13:18.911208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.230116ms"
	I1019 13:13:18.911296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.419µs"
	I1019 13:13:19.461115       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1019 13:13:19.493236       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-xfw5z"
	I1019 13:13:19.504815       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="43.065218ms"
	I1019 13:13:19.517185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.315601ms"
	I1019 13:13:19.517635       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.765µs"
	I1019 13:13:33.179320       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.364µs"
	I1019 13:13:33.210736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="144.436µs"
	I1019 13:13:35.267428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.498µs"
	I1019 13:13:36.344291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="37.135827ms"
	I1019 13:13:36.345262       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.43µs"
	I1019 13:13:37.709416       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [8c5be96e4cd6e8968abac3a11aa8f97d7d7fd7839232736902ff02ddca4e2516] <==
	I1019 13:13:19.237962       1 server_others.go:69] "Using iptables proxy"
	I1019 13:13:19.266196       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1019 13:13:19.308946       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:13:19.311229       1 server_others.go:152] "Using iptables Proxier"
	I1019 13:13:19.311259       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1019 13:13:19.311267       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1019 13:13:19.311300       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1019 13:13:19.311563       1 server.go:846] "Version info" version="v1.28.0"
	I1019 13:13:19.311574       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:13:19.312786       1 config.go:188] "Starting service config controller"
	I1019 13:13:19.312796       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1019 13:13:19.312811       1 config.go:97] "Starting endpoint slice config controller"
	I1019 13:13:19.312815       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1019 13:13:19.313209       1 config.go:315] "Starting node config controller"
	I1019 13:13:19.313216       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1019 13:13:19.413741       1 shared_informer.go:318] Caches are synced for node config
	I1019 13:13:19.413779       1 shared_informer.go:318] Caches are synced for service config
	I1019 13:13:19.413806       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e9b06d47d0066359f195f6a435f1fc0ee5ccd6731575f126c3917da2dd94a7fb] <==
	W1019 13:13:01.735219       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1019 13:13:01.735227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1019 13:13:01.735262       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1019 13:13:01.735270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1019 13:13:01.735308       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1019 13:13:01.735316       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1019 13:13:01.739923       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1019 13:13:01.740002       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 13:13:01.740377       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1019 13:13:01.740444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1019 13:13:01.740539       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1019 13:13:01.740578       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1019 13:13:01.740668       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1019 13:13:01.741458       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1019 13:13:01.741755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1019 13:13:01.741808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1019 13:13:01.741886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1019 13:13:01.741923       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1019 13:13:01.742001       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1019 13:13:01.742045       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1019 13:13:02.608776       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1019 13:13:02.608904       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1019 13:13:02.708746       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1019 13:13:02.708875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1019 13:13:03.424101       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 19 13:13:18 old-k8s-version-842494 kubelet[1371]: I1019 13:13:18.785278    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11b55bd3-7ea2-4af9-ab7e-13998f6917c5-kube-proxy\") pod \"kube-proxy-v7wq7\" (UID: \"11b55bd3-7ea2-4af9-ab7e-13998f6917c5\") " pod="kube-system/kube-proxy-v7wq7"
	Oct 19 13:13:18 old-k8s-version-842494 kubelet[1371]: I1019 13:13:18.785304    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31f7a27a-624a-440e-84d7-fc0904e489e0-xtables-lock\") pod \"kindnet-7lwtw\" (UID: \"31f7a27a-624a-440e-84d7-fc0904e489e0\") " pod="kube-system/kindnet-7lwtw"
	Oct 19 13:13:18 old-k8s-version-842494 kubelet[1371]: I1019 13:13:18.785345    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/31f7a27a-624a-440e-84d7-fc0904e489e0-cni-cfg\") pod \"kindnet-7lwtw\" (UID: \"31f7a27a-624a-440e-84d7-fc0904e489e0\") " pod="kube-system/kindnet-7lwtw"
	Oct 19 13:13:18 old-k8s-version-842494 kubelet[1371]: I1019 13:13:18.785374    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54msj\" (UniqueName: \"kubernetes.io/projected/11b55bd3-7ea2-4af9-ab7e-13998f6917c5-kube-api-access-54msj\") pod \"kube-proxy-v7wq7\" (UID: \"11b55bd3-7ea2-4af9-ab7e-13998f6917c5\") " pod="kube-system/kube-proxy-v7wq7"
	Oct 19 13:13:18 old-k8s-version-842494 kubelet[1371]: I1019 13:13:18.785409    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg5sv\" (UniqueName: \"kubernetes.io/projected/31f7a27a-624a-440e-84d7-fc0904e489e0-kube-api-access-zg5sv\") pod \"kindnet-7lwtw\" (UID: \"31f7a27a-624a-440e-84d7-fc0904e489e0\") " pod="kube-system/kindnet-7lwtw"
	Oct 19 13:13:19 old-k8s-version-842494 kubelet[1371]: W1019 13:13:19.062902    1371 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/crio-b164e2e73ebafceb15766c2df8a2be46249938425ce18430521f4aac59cb3610 WatchSource:0}: Error finding container b164e2e73ebafceb15766c2df8a2be46249938425ce18430521f4aac59cb3610: Status 404 returned error can't find the container with id b164e2e73ebafceb15766c2df8a2be46249938425ce18430521f4aac59cb3610
	Oct 19 13:13:23 old-k8s-version-842494 kubelet[1371]: I1019 13:13:23.202792    1371 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-v7wq7" podStartSLOduration=5.202748193 podCreationTimestamp="2025-10-19 13:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:13:20.206173049 +0000 UTC m=+15.359409226" watchObservedRunningTime="2025-10-19 13:13:23.202748193 +0000 UTC m=+18.355984344"
	Oct 19 13:13:24 old-k8s-version-842494 kubelet[1371]: I1019 13:13:24.998728    1371 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-7lwtw" podStartSLOduration=3.433863485 podCreationTimestamp="2025-10-19 13:13:18 +0000 UTC" firstStartedPulling="2025-10-19 13:13:19.050472227 +0000 UTC m=+14.203708379" lastFinishedPulling="2025-10-19 13:13:22.615294074 +0000 UTC m=+17.768530226" observedRunningTime="2025-10-19 13:13:23.203540778 +0000 UTC m=+18.356776938" watchObservedRunningTime="2025-10-19 13:13:24.998685332 +0000 UTC m=+20.151921492"
	Oct 19 13:13:33 old-k8s-version-842494 kubelet[1371]: I1019 13:13:33.124134    1371 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 19 13:13:33 old-k8s-version-842494 kubelet[1371]: I1019 13:13:33.176294    1371 topology_manager.go:215] "Topology Admit Handler" podUID="ca5b3ce0-02bc-47cc-b0d7-22b5c87208b0" podNamespace="kube-system" podName="coredns-5dd5756b68-5mdz7"
	Oct 19 13:13:33 old-k8s-version-842494 kubelet[1371]: I1019 13:13:33.178278    1371 topology_manager.go:215] "Topology Admit Handler" podUID="3d912c2a-b19f-4951-993f-64c474ba1b27" podNamespace="kube-system" podName="storage-provisioner"
	Oct 19 13:13:33 old-k8s-version-842494 kubelet[1371]: W1019 13:13:33.185287    1371 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-842494" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-842494' and this object
	Oct 19 13:13:33 old-k8s-version-842494 kubelet[1371]: E1019 13:13:33.185466    1371 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-842494" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-842494' and this object
	Oct 19 13:13:33 old-k8s-version-842494 kubelet[1371]: I1019 13:13:33.231824    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca5b3ce0-02bc-47cc-b0d7-22b5c87208b0-config-volume\") pod \"coredns-5dd5756b68-5mdz7\" (UID: \"ca5b3ce0-02bc-47cc-b0d7-22b5c87208b0\") " pod="kube-system/coredns-5dd5756b68-5mdz7"
	Oct 19 13:13:33 old-k8s-version-842494 kubelet[1371]: I1019 13:13:33.232009    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3d912c2a-b19f-4951-993f-64c474ba1b27-tmp\") pod \"storage-provisioner\" (UID: \"3d912c2a-b19f-4951-993f-64c474ba1b27\") " pod="kube-system/storage-provisioner"
	Oct 19 13:13:33 old-k8s-version-842494 kubelet[1371]: I1019 13:13:33.232102    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpzsv\" (UniqueName: \"kubernetes.io/projected/ca5b3ce0-02bc-47cc-b0d7-22b5c87208b0-kube-api-access-rpzsv\") pod \"coredns-5dd5756b68-5mdz7\" (UID: \"ca5b3ce0-02bc-47cc-b0d7-22b5c87208b0\") " pod="kube-system/coredns-5dd5756b68-5mdz7"
	Oct 19 13:13:33 old-k8s-version-842494 kubelet[1371]: I1019 13:13:33.232217    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wctk2\" (UniqueName: \"kubernetes.io/projected/3d912c2a-b19f-4951-993f-64c474ba1b27-kube-api-access-wctk2\") pod \"storage-provisioner\" (UID: \"3d912c2a-b19f-4951-993f-64c474ba1b27\") " pod="kube-system/storage-provisioner"
	Oct 19 13:13:34 old-k8s-version-842494 kubelet[1371]: E1019 13:13:34.340595    1371 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Oct 19 13:13:34 old-k8s-version-842494 kubelet[1371]: E1019 13:13:34.341111    1371 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ca5b3ce0-02bc-47cc-b0d7-22b5c87208b0-config-volume podName:ca5b3ce0-02bc-47cc-b0d7-22b5c87208b0 nodeName:}" failed. No retries permitted until 2025-10-19 13:13:34.841084287 +0000 UTC m=+29.994320439 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ca5b3ce0-02bc-47cc-b0d7-22b5c87208b0-config-volume") pod "coredns-5dd5756b68-5mdz7" (UID: "ca5b3ce0-02bc-47cc-b0d7-22b5c87208b0") : failed to sync configmap cache: timed out waiting for the condition
	Oct 19 13:13:35 old-k8s-version-842494 kubelet[1371]: W1019 13:13:35.022136    1371 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/crio-621bc6cb6653739c6e02aaf9557ae74f640b0174cb56e0e9397ffd4e7275346a WatchSource:0}: Error finding container 621bc6cb6653739c6e02aaf9557ae74f640b0174cb56e0e9397ffd4e7275346a: Status 404 returned error can't find the container with id 621bc6cb6653739c6e02aaf9557ae74f640b0174cb56e0e9397ffd4e7275346a
	Oct 19 13:13:35 old-k8s-version-842494 kubelet[1371]: I1019 13:13:35.264679    1371 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.264636665 podCreationTimestamp="2025-10-19 13:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:13:34.259057657 +0000 UTC m=+29.412293825" watchObservedRunningTime="2025-10-19 13:13:35.264636665 +0000 UTC m=+30.417872817"
	Oct 19 13:13:36 old-k8s-version-842494 kubelet[1371]: I1019 13:13:36.296924    1371 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5mdz7" podStartSLOduration=18.29675979 podCreationTimestamp="2025-10-19 13:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:13:35.266565487 +0000 UTC m=+30.419801647" watchObservedRunningTime="2025-10-19 13:13:36.29675979 +0000 UTC m=+31.449995950"
	Oct 19 13:13:38 old-k8s-version-842494 kubelet[1371]: I1019 13:13:38.534039    1371 topology_manager.go:215] "Topology Admit Handler" podUID="a8b3e381-a2c1-49ea-a27d-b299c312c182" podNamespace="default" podName="busybox"
	Oct 19 13:13:38 old-k8s-version-842494 kubelet[1371]: I1019 13:13:38.577979    1371 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5dkh\" (UniqueName: \"kubernetes.io/projected/a8b3e381-a2c1-49ea-a27d-b299c312c182-kube-api-access-h5dkh\") pod \"busybox\" (UID: \"a8b3e381-a2c1-49ea-a27d-b299c312c182\") " pod="default/busybox"
	Oct 19 13:13:38 old-k8s-version-842494 kubelet[1371]: W1019 13:13:38.873858    1371 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/crio-86ec3b26d1007bbe65c46d3ba7d1d7389286e4c670f53d4d682f1d4001330a3f WatchSource:0}: Error finding container 86ec3b26d1007bbe65c46d3ba7d1d7389286e4c670f53d4d682f1d4001330a3f: Status 404 returned error can't find the container with id 86ec3b26d1007bbe65c46d3ba7d1d7389286e4c670f53d4d682f1d4001330a3f
	
	
	==> storage-provisioner [4fadd59900d85697ae86b897b32c714cc23e109ca2ac9cf488a3bbd39462653c] <==
	I1019 13:13:33.585741       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 13:13:33.607583       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 13:13:33.609665       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 13:13:33.629471       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 13:13:33.629778       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-842494_cc42b366-cec0-43ad-8cea-33216863d941!
	I1019 13:13:33.633487       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba2428d6-3741-40ad-80da-985be3fb4b28", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-842494_cc42b366-cec0-43ad-8cea-33216863d941 became leader
	I1019 13:13:33.732087       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-842494_cc42b366-cec0-43ad-8cea-33216863d941!
	

                                                
                                                
-- /stdout --
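
A sanity check on the post-mortem above: the node is Ready and all nine pods are scheduled, so the EnableAddonWhileActive failure sits in minikube's addon path, not in the cluster itself. The Allocated resources figures also add up: requests of 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) total 850m, and 850m of the node's 2000m CPU capacity is the 42% shown. A minimal sketch to list the per-container requests and redo that sum, assuming the old-k8s-version-842494 context is still reachable:

	kubectl --context old-k8s-version-842494 get pods -A \
	  -o jsonpath='{range .items[*].spec.containers[*]}{.resources.requests.cpu}{"\n"}{end}'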
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-842494 -n old-k8s-version-842494
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-842494 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.64s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-108149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-108149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (403.052944ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:15:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
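
The MK_ADDON_ENABLE_PAUSED error above shows what `addons enable` does before touching any manifests: it asks whether the cluster is paused by shelling out to `sudo runc list -f json`, and that check dies because /run/runc is missing inside the node. A minimal sketch for reproducing the check by hand, assuming the no-preload-108149 node is still up; whether cri-o on this image keeps runc state under /run/runc at all is an assumption the log does not confirm:

	minikube -p no-preload-108149 ssh -- sudo runc list -f json          # reproduces: open /run/runc: no such file or directory
	minikube -p no-preload-108149 ssh -- sudo crictl ps --state Running  # cri-o's own view of running containers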
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-108149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-108149 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-108149 describe deploy/metrics-server -n kube-system: exit status 1 (105.738409ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-108149 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
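The expected string follows from the flags on the enable command: --registries=MetricsServer=fake.domain supplies the registry prefix and --images=MetricsServer=registry.k8s.io/echoserver:1.4 the image path, so a successful enable would render the deployment image as fake.domain/registry.k8s.io/echoserver:1.4. Because the enable itself exited 11, the deployment was never created, which is why the describe returns NotFound and the assertion sees empty deployment info. For reference, a one-liner for the same image check when the deployment does exist (the jsonpath and container index are assumptions about the addon's manifest):

	kubectl --context no-preload-108149 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'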
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-108149
helpers_test.go:243: (dbg) docker inspect no-preload-108149:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0",
	        "Created": "2025-10-19T13:13:42.966864471Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 476134,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:13:43.048458186Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/hostname",
	        "HostsPath": "/var/lib/docker/containers/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/hosts",
	        "LogPath": "/var/lib/docker/containers/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0-json.log",
	        "Name": "/no-preload-108149",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-108149:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-108149",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0",
	                "LowerDir": "/var/lib/docker/overlay2/ca33adf3602bb1f3e90dd2bca8f00da7d19763fa3c96fba2f19c6b9ace8c8b7b-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca33adf3602bb1f3e90dd2bca8f00da7d19763fa3c96fba2f19c6b9ace8c8b7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca33adf3602bb1f3e90dd2bca8f00da7d19763fa3c96fba2f19c6b9ace8c8b7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca33adf3602bb1f3e90dd2bca8f00da7d19763fa3c96fba2f19c6b9ace8c8b7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-108149",
	                "Source": "/var/lib/docker/volumes/no-preload-108149/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-108149",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-108149",
	                "name.minikube.sigs.k8s.io": "no-preload-108149",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27165b6193a40fed343dea6d8fc33bc4791b8dcd07f5e187006aa0579e47e049",
	            "SandboxKey": "/var/run/docker/netns/27165b6193a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-108149": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:3c:53:8a:ff:ae",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02fa40d5a7624754fb29434a70126850295cfdc9e5c6d2dc3c5e97dc6c14e8ed",
	                    "EndpointID": "627bbe07ca14d100d8eb1e988b5df9af493f91f0d20f4d98840cf2c94c18fd84",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-108149",
	                        "4857474c82b9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
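
One detail in the inspect output is worth flagging against the runc failure above: HostConfig.Tmpfs mounts tmpfs over /run and /tmp, so anything under /run inside the node, including a /run/runc state directory, is ephemeral and starts empty. That is consistent with, though not proven by, the `open /run/runc: no such file or directory` error. A quick way to read just that field, assuming the container is still up:

	docker inspect no-preload-108149 --format '{{json .HostConfig.Tmpfs}}'
	# prints: {"/run":"","/tmp":""}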
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-108149 -n no-preload-108149
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-108149 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-108149 logs -n 25: (1.276373526s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-696007 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │                     │
	│ ssh     │ -p cilium-696007 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │                     │
	│ ssh     │ -p cilium-696007 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │                     │
	│ ssh     │ -p cilium-696007 sudo crio config                                                                                                                                                                                                             │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │                     │
	│ delete  │ -p cilium-696007                                                                                                                                                                                                                              │ cilium-696007             │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │ 19 Oct 25 13:09 UTC │
	│ start   │ -p cert-expiration-088393 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │ 19 Oct 25 13:10 UTC │
	│ start   │ -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-104724 │ jenkins │ v1.37.0 │ 19 Oct 25 13:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-104724 │ jenkins │ v1.37.0 │ 19 Oct 25 13:10 UTC │ 19 Oct 25 13:11 UTC │
	│ delete  │ -p kubernetes-upgrade-104724                                                                                                                                                                                                                  │ kubernetes-upgrade-104724 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ start   │ -p force-systemd-flag-606072 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ force-systemd-flag-606072 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ delete  │ -p force-systemd-flag-606072                                                                                                                                                                                                                  │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ start   │ -p cert-options-264135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:12 UTC │
	│ ssh     │ cert-options-264135 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ ssh     │ -p cert-options-264135 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ delete  │ -p cert-options-264135                                                                                                                                                                                                                        │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p cert-expiration-088393 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ delete  │ -p cert-expiration-088393                                                                                                                                                                                                                     │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-842494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │                     │
	│ stop    │ -p old-k8s-version-842494 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-842494 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │ 19 Oct 25 13:14 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-108149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:14:06
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:14:06.165227  478871 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:14:06.165428  478871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:14:06.165440  478871 out.go:374] Setting ErrFile to fd 2...
	I1019 13:14:06.165445  478871 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:14:06.165704  478871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:14:06.166087  478871 out.go:368] Setting JSON to false
	I1019 13:14:06.166975  478871 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10597,"bootTime":1760869050,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:14:06.167043  478871 start.go:141] virtualization:  
	I1019 13:14:06.171977  478871 out.go:179] * [old-k8s-version-842494] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:14:06.179675  478871 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:14:06.179717  478871 notify.go:220] Checking for updates...
	I1019 13:14:06.184284  478871 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:14:06.189490  478871 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:14:06.193172  478871 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:14:06.195987  478871 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:14:06.198842  478871 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:14:06.205254  478871 config.go:182] Loaded profile config "old-k8s-version-842494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 13:14:06.208706  478871 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1019 13:14:06.215204  478871 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:14:06.255514  478871 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:14:06.255644  478871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:14:06.338265  478871 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-19 13:14:06.329164481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:14:06.338375  478871 docker.go:318] overlay module found
	I1019 13:14:06.341471  478871 out.go:179] * Using the docker driver based on existing profile
	I1019 13:14:06.344214  478871 start.go:305] selected driver: docker
	I1019 13:14:06.344229  478871 start.go:925] validating driver "docker" against &{Name:old-k8s-version-842494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-842494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:14:06.344330  478871 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:14:06.345021  478871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:14:06.431633  478871 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-19 13:14:06.420794139 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:14:06.431945  478871 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:14:06.431972  478871 cni.go:84] Creating CNI manager for ""
	I1019 13:14:06.432031  478871 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:14:06.432064  478871 start.go:349] cluster config:
	{Name:old-k8s-version-842494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-842494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:14:06.435667  478871 out.go:179] * Starting "old-k8s-version-842494" primary control-plane node in "old-k8s-version-842494" cluster
	I1019 13:14:06.438654  478871 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:14:06.441629  478871 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:14:06.444657  478871 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 13:14:06.444718  478871 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1019 13:14:06.444727  478871 cache.go:58] Caching tarball of preloaded images
	I1019 13:14:06.444743  478871 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:14:06.444810  478871 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:14:06.444819  478871 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1019 13:14:06.444937  478871 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/config.json ...
	I1019 13:14:06.465991  478871 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:14:06.466016  478871 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:14:06.466029  478871 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:14:06.466057  478871 start.go:360] acquireMachinesLock for old-k8s-version-842494: {Name:mk6b6350336af595b8b21d4aeaf23d79094ed2de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:14:06.466113  478871 start.go:364] duration metric: took 32.993µs to acquireMachinesLock for "old-k8s-version-842494"
	I1019 13:14:06.466136  478871 start.go:96] Skipping create...Using existing machine configuration
	I1019 13:14:06.466145  478871 fix.go:54] fixHost starting: 
	I1019 13:14:06.466403  478871 cli_runner.go:164] Run: docker container inspect old-k8s-version-842494 --format={{.State.Status}}
	I1019 13:14:06.487261  478871 fix.go:112] recreateIfNeeded on old-k8s-version-842494: state=Stopped err=<nil>
	W1019 13:14:06.487301  478871 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 13:14:02.142752  475820 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.647317931s)
	I1019 13:14:02.142780  475820 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1019 13:14:02.142799  475820 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1019 13:14:02.142845  475820 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1019 13:14:02.704094  475820 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1019 13:14:02.704129  475820 cache_images.go:124] Successfully loaded all cached images
	I1019 13:14:02.704136  475820 cache_images.go:93] duration metric: took 14.14364808s to LoadCachedImages
	I1019 13:14:02.704153  475820 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 13:14:02.704253  475820 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-108149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-108149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
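	The empty ExecStart= line in the drop-in above is the standard systemd override pattern: it clears the ExecStart inherited from the base kubelet unit so the following line fully replaces it rather than adding a second command. A sketch of the manual equivalent of the activation steps the harness runs later in this log:

	  sudo systemctl daemon-reload   # pick up the new drop-in
	  sudo systemctl start kubelet   # start with the overridden command line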
	I1019 13:14:02.704340  475820 ssh_runner.go:195] Run: crio config
	I1019 13:14:02.776635  475820 cni.go:84] Creating CNI manager for ""
	I1019 13:14:02.776660  475820 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:14:02.776679  475820 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 13:14:02.776709  475820 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-108149 NodeName:no-preload-108149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:14:02.776841  475820 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-108149"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
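	A config like the one dumped above can be sanity-checked before the real cluster bootstrap; a minimal sketch, assuming the file has already been copied to the path this log uses later, relying on kubeadm's documented --dry-run mode on init:

	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run
	  # validates the config and prints the actions init would take, without changing the node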
	I1019 13:14:02.776920  475820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 13:14:02.785435  475820 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1019 13:14:02.785496  475820 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1019 13:14:02.793026  475820 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1019 13:14:02.793118  475820 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1019 13:14:02.793263  475820 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1019 13:14:02.793650  475820 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1019 13:14:02.797674  475820 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1019 13:14:02.797833  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1019 13:14:03.557528  475820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:14:03.585499  475820 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1019 13:14:03.591670  475820 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1019 13:14:03.591799  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1019 13:14:03.969571  475820 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1019 13:14:03.985642  475820 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1019 13:14:03.985983  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
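	The ?checksum=file: suffix in the download URLs above is minikube's internal notation for verifying each binary against its published .sha256 file. A hand-run equivalent for one binary, a sketch using the version and architecture from this run:

	  curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm
	  curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256
	  echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check   # expects "kubeadm: OK"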
	I1019 13:14:04.375582  475820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 13:14:04.384210  475820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 13:14:04.399394  475820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:14:04.412237  475820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1019 13:14:04.426026  475820 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 13:14:04.430072  475820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:14:04.440595  475820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:14:04.551009  475820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:14:04.571611  475820 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149 for IP: 192.168.76.2
	I1019 13:14:04.571686  475820 certs.go:195] generating shared ca certs ...
	I1019 13:14:04.571725  475820 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:14:04.571923  475820 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 13:14:04.572006  475820 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 13:14:04.572033  475820 certs.go:257] generating profile certs ...
	I1019 13:14:04.572121  475820 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.key
	I1019 13:14:04.572174  475820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt with IP's: []
	I1019 13:14:05.094303  475820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt ...
	I1019 13:14:05.094340  475820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: {Name:mk2ecaca1ad84c620119152ad9444d74c8b99b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:14:05.094579  475820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.key ...
	I1019 13:14:05.094594  475820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.key: {Name:mkfd5c97f4633af3a594cd7a5c65c1044bf954d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:14:05.094693  475820 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/apiserver.key.0139c4ce
	I1019 13:14:05.094715  475820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/apiserver.crt.0139c4ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1019 13:14:05.442239  475820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/apiserver.crt.0139c4ce ...
	I1019 13:14:05.442273  475820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/apiserver.crt.0139c4ce: {Name:mk590fc794b83cf664f72fafa7c50b2c69e92064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:14:05.442466  475820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/apiserver.key.0139c4ce ...
	I1019 13:14:05.442483  475820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/apiserver.key.0139c4ce: {Name:mk788e752a966d2d44cd6be912d101975c2963cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:14:05.442579  475820 certs.go:382] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/apiserver.crt.0139c4ce -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/apiserver.crt
	I1019 13:14:05.442662  475820 certs.go:386] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/apiserver.key.0139c4ce -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/apiserver.key
	I1019 13:14:05.442721  475820 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/proxy-client.key
	I1019 13:14:05.442739  475820 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/proxy-client.crt with IP's: []
	I1019 13:14:06.367314  475820 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/proxy-client.crt ...
	I1019 13:14:06.367345  475820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/proxy-client.crt: {Name:mk64c9a9f8685e9f3c2c80ed7727ee43ce002977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:14:06.367525  475820 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/proxy-client.key ...
	I1019 13:14:06.367544  475820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/proxy-client.key: {Name:mk384de93116ab655d5a378cdcfcfa5c83d8a319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:14:06.367728  475820 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem (1338 bytes)
	W1019 13:14:06.367773  475820 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518_empty.pem, impossibly tiny 0 bytes
	I1019 13:14:06.367788  475820 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 13:14:06.367815  475820 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 13:14:06.367841  475820 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 13:14:06.367865  475820 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 13:14:06.367911  475820 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:14:06.368469  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 13:14:06.393842  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 13:14:06.417005  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 13:14:06.438290  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 13:14:06.467120  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 13:14:06.488237  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 13:14:06.520823  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 13:14:06.547535  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 13:14:06.571146  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /usr/share/ca-certificates/2945182.pem (1708 bytes)
	I1019 13:14:06.592681  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 13:14:06.627593  475820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem --> /usr/share/ca-certificates/294518.pem (1338 bytes)
	I1019 13:14:06.648288  475820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 13:14:06.667416  475820 ssh_runner.go:195] Run: openssl version
	I1019 13:14:06.674714  475820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2945182.pem && ln -fs /usr/share/ca-certificates/2945182.pem /etc/ssl/certs/2945182.pem"
	I1019 13:14:06.683819  475820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2945182.pem
	I1019 13:14:06.688226  475820 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:20 /usr/share/ca-certificates/2945182.pem
	I1019 13:14:06.688327  475820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2945182.pem
	I1019 13:14:06.735327  475820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2945182.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 13:14:06.743981  475820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 13:14:06.752249  475820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:14:06.758065  475820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:14:06.758182  475820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:14:06.826859  475820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 13:14:06.840468  475820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294518.pem && ln -fs /usr/share/ca-certificates/294518.pem /etc/ssl/certs/294518.pem"
	I1019 13:14:06.862931  475820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294518.pem
	I1019 13:14:06.870302  475820 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:20 /usr/share/ca-certificates/294518.pem
	I1019 13:14:06.870370  475820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294518.pem
	I1019 13:14:06.929137  475820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294518.pem /etc/ssl/certs/51391683.0"
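	The openssl/ln pairs above populate OpenSSL's hashed CApath layout: the subject hash printed by openssl is the required name of a <hash>.0 symlink in /etc/ssl/certs, which is what each ln -fs creates. A sketch using the minikubeCA certificate and the hash observed in this run:

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 here
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0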
	I1019 13:14:06.943838  475820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 13:14:06.954625  475820 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 13:14:06.954675  475820 kubeadm.go:400] StartCluster: {Name:no-preload-108149 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-108149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:14:06.954746  475820 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 13:14:06.954802  475820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 13:14:07.007826  475820 cri.go:89] found id: ""
	I1019 13:14:07.007893  475820 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 13:14:07.026789  475820 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 13:14:07.042059  475820 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 13:14:07.042123  475820 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 13:14:07.054998  475820 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 13:14:07.055017  475820 kubeadm.go:157] found existing configuration files:
	
	I1019 13:14:07.055068  475820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 13:14:07.068767  475820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 13:14:07.068828  475820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 13:14:07.079934  475820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 13:14:07.093020  475820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 13:14:07.093126  475820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 13:14:07.107774  475820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 13:14:07.120882  475820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 13:14:07.120999  475820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 13:14:07.128769  475820 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 13:14:07.144466  475820 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 13:14:07.144582  475820 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 13:14:07.154214  475820 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 13:14:07.204797  475820 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 13:14:07.205331  475820 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 13:14:07.268246  475820 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 13:14:07.268329  475820 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 13:14:07.268370  475820 kubeadm.go:318] OS: Linux
	I1019 13:14:07.268429  475820 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 13:14:07.268484  475820 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1019 13:14:07.268538  475820 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 13:14:07.268593  475820 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 13:14:07.268647  475820 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 13:14:07.268701  475820 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 13:14:07.268752  475820 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 13:14:07.268803  475820 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 13:14:07.268853  475820 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1019 13:14:07.361797  475820 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 13:14:07.361933  475820 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 13:14:07.362045  475820 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 13:14:07.376733  475820 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 13:14:06.490707  478871 out.go:252] * Restarting existing docker container for "old-k8s-version-842494" ...
	I1019 13:14:06.490781  478871 cli_runner.go:164] Run: docker start old-k8s-version-842494
	I1019 13:14:06.796381  478871 cli_runner.go:164] Run: docker container inspect old-k8s-version-842494 --format={{.State.Status}}
	I1019 13:14:06.828606  478871 kic.go:430] container "old-k8s-version-842494" state is running.
	I1019 13:14:06.828998  478871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-842494
	I1019 13:14:06.856587  478871 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/config.json ...
	I1019 13:14:06.856818  478871 machine.go:93] provisionDockerMachine start ...
	I1019 13:14:06.856887  478871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-842494
	I1019 13:14:06.881306  478871 main.go:141] libmachine: Using SSH client type: native
	I1019 13:14:06.881661  478871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1019 13:14:06.881705  478871 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:14:06.884176  478871 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1019 13:14:10.059066  478871 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-842494
	
	I1019 13:14:10.059183  478871 ubuntu.go:182] provisioning hostname "old-k8s-version-842494"
	I1019 13:14:10.059318  478871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-842494
	I1019 13:14:10.086558  478871 main.go:141] libmachine: Using SSH client type: native
	I1019 13:14:10.086879  478871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1019 13:14:10.086891  478871 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-842494 && echo "old-k8s-version-842494" | sudo tee /etc/hostname
	I1019 13:14:10.252437  478871 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-842494
	
	I1019 13:14:10.252600  478871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-842494
	I1019 13:14:10.274732  478871 main.go:141] libmachine: Using SSH client type: native
	I1019 13:14:10.275031  478871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1019 13:14:10.275048  478871 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-842494' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-842494/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-842494' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:14:10.426243  478871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
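(The SSH snippet above is minikube's idempotent hostname pin: grep -xq, whole-line match and quiet, first tests whether /etc/hosts already carries the node name; if the distro's stock 127.0.1.1 entry exists it is rewritten with sed, otherwise a new line is appended via tee -a. A standalone sketch of the same pattern, with NODE as a hypothetical stand-in for the profile's node name:

	NODE=my-node   # hypothetical; minikube substitutes the profile's node name
	if ! grep -xq ".*\s${NODE}" /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NODE}/g" /etc/hosts   # rewrite the stock entry
	  else
	    echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts               # append a new entry
	  fi
	fi
)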
	I1019 13:14:10.426311  478871 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:14:10.426385  478871 ubuntu.go:190] setting up certificates
	I1019 13:14:10.426445  478871 provision.go:84] configureAuth start
	I1019 13:14:10.426533  478871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-842494
	I1019 13:14:10.449421  478871 provision.go:143] copyHostCerts
	I1019 13:14:10.449483  478871 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:14:10.449500  478871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:14:10.449574  478871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:14:10.449667  478871 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:14:10.449672  478871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:14:10.449727  478871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:14:10.449785  478871 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:14:10.449789  478871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:14:10.449813  478871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:14:10.449859  478871 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-842494 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-842494]
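(provision.go signs the machine's server certificate against the minikube CA with the SAN list shown above, in-process in Go. An equivalent sketch with the openssl CLI; the CSR subject matches the org reported in the log, but the temp paths and validity period are illustrative, not taken from the run:

	cd /home/jenkins/minikube-integration/21772-292654/.minikube
	# key + CSR for the machine's server cert
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout machines/server-key.pem -out /tmp/server.csr \
	  -subj "/O=jenkins.old-k8s-version-842494"
	# SAN list as reported in the log line above
	printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:old-k8s-version-842494\n' > /tmp/san.ext
	# sign with the CA key pair kept under certs/
	openssl x509 -req -in /tmp/server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem \
	  -CAcreateserial -days 365 -extfile /tmp/san.ext -out machines/server.pem
)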
	I1019 13:14:10.735177  478871 provision.go:177] copyRemoteCerts
	I1019 13:14:10.735287  478871 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:14:10.735369  478871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-842494
	I1019 13:14:10.759380  478871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/old-k8s-version-842494/id_rsa Username:docker}
	I1019 13:14:10.862341  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:14:10.880964  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1019 13:14:10.899911  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 13:14:10.918971  478871 provision.go:87] duration metric: took 492.498052ms to configureAuth
	I1019 13:14:10.919051  478871 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:14:10.919271  478871 config.go:182] Loaded profile config "old-k8s-version-842494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 13:14:10.919426  478871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-842494
	I1019 13:14:10.938345  478871 main.go:141] libmachine: Using SSH client type: native
	I1019 13:14:10.938694  478871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1019 13:14:10.938709  478871 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:14:07.380617  475820 out.go:252]   - Generating certificates and keys ...
	I1019 13:14:07.380733  475820 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 13:14:07.380809  475820 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1019 13:14:07.565976  475820 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 13:14:08.021828  475820 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 13:14:08.413744  475820 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 13:14:09.041129  475820 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 13:14:09.111365  475820 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 13:14:09.111753  475820 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-108149] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 13:14:09.803196  475820 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 13:14:09.803553  475820 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-108149] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 13:14:10.034289  475820 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 13:14:10.104399  475820 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 13:14:10.803302  475820 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 13:14:10.803601  475820 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 13:14:10.882867  475820 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 13:14:11.047400  475820 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 13:14:11.937049  475820 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 13:14:12.768629  475820 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 13:14:13.097869  475820 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 13:14:13.100389  475820 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 13:14:13.103801  475820 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 13:14:11.319319  478871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:14:11.319403  478871 machine.go:96] duration metric: took 4.462574296s to provisionDockerMachine
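(The /etc/sysconfig/crio.minikube file written just above carries extra runtime flags for CRI-O; '--insecure-registry 10.96.0.0/12' covers the service CIDR so in-cluster registries can be pulled from over plain HTTP. Presumably the crio unit in the kicbase image sources it via an EnvironmentFile= directive and expands $CRIO_MINIKUBE_OPTIONS on its command line; a hypothetical systemd drop-in wiring that up would read:

	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
)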
	I1019 13:14:11.319436  478871 start.go:293] postStartSetup for "old-k8s-version-842494" (driver="docker")
	I1019 13:14:11.319474  478871 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:14:11.319584  478871 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:14:11.319674  478871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-842494
	I1019 13:14:11.347664  478871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/old-k8s-version-842494/id_rsa Username:docker}
	I1019 13:14:11.464650  478871 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:14:11.468795  478871 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:14:11.468868  478871 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:14:11.468892  478871 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:14:11.468983  478871 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:14:11.469118  478871 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:14:11.469295  478871 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:14:11.481048  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:14:11.506661  478871 start.go:296] duration metric: took 187.19642ms for postStartSetup
	I1019 13:14:11.506821  478871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:14:11.506924  478871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-842494
	I1019 13:14:11.531470  478871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/old-k8s-version-842494/id_rsa Username:docker}
	I1019 13:14:11.635208  478871 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:14:11.640517  478871 fix.go:56] duration metric: took 5.174365244s for fixHost
	I1019 13:14:11.640544  478871 start.go:83] releasing machines lock for "old-k8s-version-842494", held for 5.1744187s
	I1019 13:14:11.640617  478871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-842494
	I1019 13:14:11.661946  478871 ssh_runner.go:195] Run: cat /version.json
	I1019 13:14:11.661995  478871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-842494
	I1019 13:14:11.662021  478871 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:14:11.662098  478871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-842494
	I1019 13:14:11.713464  478871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/old-k8s-version-842494/id_rsa Username:docker}
	I1019 13:14:11.713593  478871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/old-k8s-version-842494/id_rsa Username:docker}
	I1019 13:14:11.838169  478871 ssh_runner.go:195] Run: systemctl --version
	I1019 13:14:11.929093  478871 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:14:11.972546  478871 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:14:11.977531  478871 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:14:11.977600  478871 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:14:11.986146  478871 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 13:14:11.986182  478871 start.go:495] detecting cgroup driver to use...
	I1019 13:14:11.986213  478871 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:14:11.986275  478871 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:14:12.002807  478871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:14:12.019419  478871 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:14:12.019494  478871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:14:12.036971  478871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:14:12.051492  478871 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:14:12.214496  478871 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:14:12.388350  478871 docker.go:234] disabling docker service ...
	I1019 13:14:12.388504  478871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:14:12.403715  478871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:14:12.418438  478871 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:14:12.561102  478871 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:14:12.713644  478871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:14:12.728805  478871 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:14:12.743299  478871 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1019 13:14:12.743414  478871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:14:12.752682  478871 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:14:12.752792  478871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:14:12.762250  478871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:14:12.772091  478871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:14:12.781104  478871 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:14:12.790820  478871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:14:12.800008  478871 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:14:12.808852  478871 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:14:12.817848  478871 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:14:12.827484  478871 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
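(Net effect of the sed edits above, reconstructed from the commands rather than captured from the node, is a /etc/crio/crio.conf.d/02-crio.conf whose relevant keys read:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

crictl was pointed at the CRI-O socket via the /etc/crictl.yaml written at 13:14:12.728805, and the daemon-reload plus crio restart below put the whole configuration into effect.)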
	I1019 13:14:12.835533  478871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:14:12.988229  478871 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 13:14:13.155082  478871 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:14:13.155207  478871 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:14:13.159257  478871 start.go:563] Will wait 60s for crictl version
	I1019 13:14:13.159361  478871 ssh_runner.go:195] Run: which crictl
	I1019 13:14:13.162825  478871 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:14:13.210327  478871 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 13:14:13.210498  478871 ssh_runner.go:195] Run: crio --version
	I1019 13:14:13.254056  478871 ssh_runner.go:195] Run: crio --version
	I1019 13:14:13.290937  478871 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1019 13:14:13.293911  478871 cli_runner.go:164] Run: docker network inspect old-k8s-version-842494 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:14:13.314672  478871 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 13:14:13.319562  478871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
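(Worth noting: /etc/hosts is updated here by filtering with grep -v, appending the new record, writing a temp file and sudo cp-ing it back, rather than with sed -i. Inside a container /etc/hosts is a bind mount, and sed -i's rename-over-original fails on a mount point with "Device or resource busy", while cp rewrites the file in place; the same idiom reappears below for control-plane.minikube.internal. Generic form, with NAME/ADDR as placeholders:

	NAME=host.minikube.internal                 # hypothetical record name
	ADDR=192.168.85.1                           # hypothetical address
	{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${ADDR}" "${NAME}"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts                # cp writes through the bind mount
)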
	I1019 13:14:13.332193  478871 kubeadm.go:883] updating cluster {Name:old-k8s-version-842494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-842494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 13:14:13.332306  478871 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 13:14:13.332361  478871 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:14:13.378175  478871 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:14:13.378196  478871 crio.go:433] Images already preloaded, skipping extraction
	I1019 13:14:13.378254  478871 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:14:13.416308  478871 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:14:13.416375  478871 cache_images.go:85] Images are preloaded, skipping loading
	I1019 13:14:13.416399  478871 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1019 13:14:13.416529  478871 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-842494 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-842494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
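(One systemd detail in the unit rendered above: the bare ExecStart= line clears the ExecStart inherited from the base kubelet.service, so the fully-specified kubelet command replaces it instead of conflicting with it. The rendered content is pushed a few lines below as the 352-byte /lib/systemd/system/kubelet.service and the 372-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in.)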
	I1019 13:14:13.416649  478871 ssh_runner.go:195] Run: crio config
	I1019 13:14:13.497841  478871 cni.go:84] Creating CNI manager for ""
	I1019 13:14:13.497865  478871 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:14:13.497890  478871 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 13:14:13.497924  478871 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-842494 NodeName:old-k8s-version-842494 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:14:13.498083  478871 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-842494"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 13:14:13.498168  478871 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1019 13:14:13.507108  478871 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 13:14:13.507188  478871 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 13:14:13.515446  478871 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1019 13:14:13.530043  478871 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:14:13.544491  478871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1019 13:14:13.570847  478871 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 13:14:13.575067  478871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:14:13.586140  478871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:14:13.719310  478871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:14:13.735619  478871 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494 for IP: 192.168.85.2
	I1019 13:14:13.735640  478871 certs.go:195] generating shared ca certs ...
	I1019 13:14:13.735656  478871 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:14:13.735812  478871 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 13:14:13.735877  478871 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 13:14:13.735890  478871 certs.go:257] generating profile certs ...
	I1019 13:14:13.736000  478871 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/client.key
	I1019 13:14:13.736077  478871 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/apiserver.key.0bd8be40
	I1019 13:14:13.736122  478871 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/proxy-client.key
	I1019 13:14:13.736256  478871 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem (1338 bytes)
	W1019 13:14:13.736300  478871 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518_empty.pem, impossibly tiny 0 bytes
	I1019 13:14:13.736325  478871 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 13:14:13.736364  478871 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 13:14:13.736399  478871 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 13:14:13.736426  478871 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 13:14:13.736481  478871 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:14:13.737167  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 13:14:13.759002  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 13:14:13.778320  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 13:14:13.798985  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 13:14:13.817886  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1019 13:14:13.837328  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 13:14:13.861570  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 13:14:13.892173  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 13:14:13.934056  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 13:14:14.005235  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem --> /usr/share/ca-certificates/294518.pem (1338 bytes)
	I1019 13:14:14.076985  478871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /usr/share/ca-certificates/2945182.pem (1708 bytes)
	I1019 13:14:14.122272  478871 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 13:14:14.141060  478871 ssh_runner.go:195] Run: openssl version
	I1019 13:14:14.148254  478871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 13:14:14.156883  478871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:14:14.160687  478871 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:14:14.160775  478871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:14:14.203480  478871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 13:14:14.214725  478871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294518.pem && ln -fs /usr/share/ca-certificates/294518.pem /etc/ssl/certs/294518.pem"
	I1019 13:14:14.223843  478871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294518.pem
	I1019 13:14:14.227685  478871 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:20 /usr/share/ca-certificates/294518.pem
	I1019 13:14:14.227799  478871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294518.pem
	I1019 13:14:14.270805  478871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294518.pem /etc/ssl/certs/51391683.0"
	I1019 13:14:14.278547  478871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2945182.pem && ln -fs /usr/share/ca-certificates/2945182.pem /etc/ssl/certs/2945182.pem"
	I1019 13:14:14.287957  478871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2945182.pem
	I1019 13:14:14.291723  478871 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:20 /usr/share/ca-certificates/2945182.pem
	I1019 13:14:14.291861  478871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2945182.pem
	I1019 13:14:14.334511  478871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2945182.pem /etc/ssl/certs/3ec20f2e.0"
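(The hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) build OpenSSL's hashed trust directory: anything scanning /etc/ssl/certs resolves a CA by its subject-hash filename, which is exactly what the preceding openssl x509 -hash -noout calls print. The pattern for any new CA:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$pem")       # prints e.g. b5213941
	sudo ln -fs "/etc/ssl/certs/$(basename "$pem")" "/etc/ssl/certs/${h}.0"   # .0 = first collision slot
)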
	I1019 13:14:14.342178  478871 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 13:14:14.345993  478871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 13:14:14.386959  478871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 13:14:14.428291  478871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 13:14:14.469656  478871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 13:14:14.510538  478871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 13:14:14.561019  478871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
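(Each -checkend 86400 run above is an expiry probe: openssl exits 0 if the certificate will still be valid 86400 seconds (24 h) from now and 1 otherwise, so a plain shell test is enough to decide whether the control-plane certs can be reused or must be regenerated:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "cert good for at least another 24h"
	else
	  echo "cert expires within 24h; regenerate"
	fi
)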
	I1019 13:14:14.631143  478871 kubeadm.go:400] StartCluster: {Name:old-k8s-version-842494 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-842494 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:14:14.631279  478871 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 13:14:14.631398  478871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 13:14:14.726355  478871 cri.go:89] found id: ""
	I1019 13:14:14.726477  478871 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 13:14:14.748962  478871 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 13:14:14.749024  478871 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 13:14:14.749115  478871 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 13:14:14.792557  478871 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 13:14:14.793013  478871 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-842494" does not appear in /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:14:14.793176  478871 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-292654/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-842494" cluster setting kubeconfig missing "old-k8s-version-842494" context setting]
	I1019 13:14:14.793512  478871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:14:14.794845  478871 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 13:14:14.824464  478871 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 13:14:14.824507  478871 kubeadm.go:601] duration metric: took 75.455109ms to restartPrimaryControlPlane
	I1019 13:14:14.824517  478871 kubeadm.go:402] duration metric: took 193.383761ms to StartCluster
	I1019 13:14:14.824532  478871 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:14:14.824613  478871 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:14:14.825286  478871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:14:14.825545  478871 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:14:14.825866  478871 config.go:182] Loaded profile config "old-k8s-version-842494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 13:14:14.825944  478871 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 13:14:14.826046  478871 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-842494"
	I1019 13:14:14.826075  478871 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-842494"
	I1019 13:14:14.826094  478871 addons.go:69] Setting dashboard=true in profile "old-k8s-version-842494"
	I1019 13:14:14.826114  478871 addons.go:238] Setting addon dashboard=true in "old-k8s-version-842494"
	W1019 13:14:14.826120  478871 addons.go:247] addon dashboard should already be in state true
	I1019 13:14:14.826148  478871 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-842494"
	I1019 13:14:14.826185  478871 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-842494"
	I1019 13:14:14.826161  478871 host.go:66] Checking if "old-k8s-version-842494" exists ...
	I1019 13:14:14.826523  478871 cli_runner.go:164] Run: docker container inspect old-k8s-version-842494 --format={{.State.Status}}
	W1019 13:14:14.826124  478871 addons.go:247] addon storage-provisioner should already be in state true
	I1019 13:14:14.827118  478871 host.go:66] Checking if "old-k8s-version-842494" exists ...
	I1019 13:14:14.827177  478871 cli_runner.go:164] Run: docker container inspect old-k8s-version-842494 --format={{.State.Status}}
	I1019 13:14:14.827519  478871 cli_runner.go:164] Run: docker container inspect old-k8s-version-842494 --format={{.State.Status}}
	I1019 13:14:14.835055  478871 out.go:179] * Verifying Kubernetes components...
	I1019 13:14:14.842776  478871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:14:14.882177  478871 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-842494"
	W1019 13:14:14.882206  478871 addons.go:247] addon default-storageclass should already be in state true
	I1019 13:14:14.882233  478871 host.go:66] Checking if "old-k8s-version-842494" exists ...
	I1019 13:14:14.882643  478871 cli_runner.go:164] Run: docker container inspect old-k8s-version-842494 --format={{.State.Status}}
	I1019 13:14:14.896952  478871 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 13:14:14.902959  478871 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 13:14:14.909753  478871 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 13:14:14.909897  478871 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:14:14.909909  478871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 13:14:14.909972  478871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-842494
	I1019 13:14:14.912932  478871 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 13:14:14.912955  478871 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 13:14:14.913016  478871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-842494
	I1019 13:14:14.936108  478871 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 13:14:14.936131  478871 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 13:14:14.936203  478871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-842494
	I1019 13:14:14.966315  478871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/old-k8s-version-842494/id_rsa Username:docker}
	I1019 13:14:14.971760  478871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/old-k8s-version-842494/id_rsa Username:docker}
	I1019 13:14:14.995533  478871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/old-k8s-version-842494/id_rsa Username:docker}
	I1019 13:14:15.354430  478871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:14:15.464366  478871 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 13:14:15.464443  478871 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 13:14:15.474277  478871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:14:15.513123  478871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 13:14:15.596460  478871 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 13:14:15.596486  478871 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 13:14:15.703306  478871 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 13:14:15.703328  478871 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 13:14:15.902803  478871 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 13:14:15.902828  478871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 13:14:16.013045  478871 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 13:14:16.013072  478871 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 13:14:16.066169  478871 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 13:14:16.066196  478871 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 13:14:16.104669  478871 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 13:14:16.104693  478871 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 13:14:16.132774  478871 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 13:14:16.132801  478871 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 13:14:16.163526  478871 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 13:14:16.163551  478871 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 13:14:13.107213  475820 out.go:252]   - Booting up control plane ...
	I1019 13:14:13.107328  475820 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 13:14:13.107410  475820 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 13:14:13.107482  475820 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 13:14:13.124523  475820 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 13:14:13.124657  475820 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 13:14:13.132859  475820 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 13:14:13.132966  475820 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 13:14:13.133008  475820 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 13:14:13.312799  475820 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 13:14:13.312936  475820 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 13:14:14.317539  475820 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001676364s
	I1019 13:14:14.317652  475820 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 13:14:14.317761  475820 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1019 13:14:14.317855  475820 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 13:14:14.317937  475820 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
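(The control-plane checks kubeadm reports above poll fixed local endpoints: the API server's /livez on the advertise address, the controller-manager's /healthz on 127.0.0.1:10257 and the scheduler's /livez on 127.0.0.1:10259. They can be probed by hand the same way; -k is needed because these ports serve self-signed certificates:

	curl -k https://192.168.76.2:8443/livez      # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez        # kube-scheduler
)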
	I1019 13:14:16.200122  478871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 13:14:20.258262  475820 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.94092368s
	I1019 13:14:25.324881  475820 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 11.007709734s
	I1019 13:14:26.046300  475820 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 11.729333363s
	I1019 13:14:26.099328  475820 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 13:14:26.137402  475820 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 13:14:26.156868  475820 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 13:14:26.159116  475820 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-108149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 13:14:26.173270  475820 kubeadm.go:318] [bootstrap-token] Using token: 2ch1pa.it11f43totc0uxq2
	I1019 13:14:26.176873  475820 out.go:252]   - Configuring RBAC rules ...
	I1019 13:14:26.177010  475820 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 13:14:26.187011  475820 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 13:14:26.206208  475820 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 13:14:26.211547  475820 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 13:14:26.228535  475820 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 13:14:26.235700  475820 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 13:14:26.455183  475820 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 13:14:26.987661  475820 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 13:14:27.462068  475820 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 13:14:27.464009  475820 kubeadm.go:318] 
	I1019 13:14:27.464094  475820 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 13:14:27.464101  475820 kubeadm.go:318] 
	I1019 13:14:27.464190  475820 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 13:14:27.464196  475820 kubeadm.go:318] 
	I1019 13:14:27.464223  475820 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 13:14:27.464284  475820 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 13:14:27.464337  475820 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 13:14:27.464341  475820 kubeadm.go:318] 
	I1019 13:14:27.464398  475820 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 13:14:27.464402  475820 kubeadm.go:318] 
	I1019 13:14:27.464451  475820 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 13:14:27.464456  475820 kubeadm.go:318] 
	I1019 13:14:27.464509  475820 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 13:14:27.464587  475820 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 13:14:27.464658  475820 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 13:14:27.464663  475820 kubeadm.go:318] 
	I1019 13:14:27.464751  475820 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 13:14:27.464831  475820 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 13:14:27.464836  475820 kubeadm.go:318] 
	I1019 13:14:27.464927  475820 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 2ch1pa.it11f43totc0uxq2 \
	I1019 13:14:27.465035  475820 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0ee0bbb0fbfe8419c71683408bd38502dbf921f3cb179cb0365daeb92f967309 \
	I1019 13:14:27.465056  475820 kubeadm.go:318] 	--control-plane 
	I1019 13:14:27.465060  475820 kubeadm.go:318] 
	I1019 13:14:27.465148  475820 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 13:14:27.465157  475820 kubeadm.go:318] 
	I1019 13:14:27.465243  475820 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 2ch1pa.it11f43totc0uxq2 \
	I1019 13:14:27.465349  475820 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0ee0bbb0fbfe8419c71683408bd38502dbf921f3cb179cb0365daeb92f967309 
	I1019 13:14:27.474464  475820 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 13:14:27.474703  475820 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 13:14:27.474820  475820 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 13:14:27.474842  475820 cni.go:84] Creating CNI manager for ""
	I1019 13:14:27.474849  475820 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:14:27.480640  475820 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 13:14:27.189901  478871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.835389639s)
	I1019 13:14:27.189998  478871 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (11.715694416s)
	I1019 13:14:27.190111  478871 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-842494" to be "Ready" ...
	I1019 13:14:27.190025  478871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.676879401s)
	I1019 13:14:27.238024  478871 node_ready.go:49] node "old-k8s-version-842494" is "Ready"
	I1019 13:14:27.238049  478871 node_ready.go:38] duration metric: took 47.91167ms for node "old-k8s-version-842494" to be "Ready" ...
	I1019 13:14:27.238062  478871 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:14:27.238119  478871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:14:28.065744  478871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.865565836s)
	I1019 13:14:28.065935  478871 api_server.go:72] duration metric: took 13.240353887s to wait for apiserver process to appear ...
	I1019 13:14:28.065959  478871 api_server.go:88] waiting for apiserver healthz status ...
	I1019 13:14:28.065980  478871 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 13:14:28.069099  478871 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-842494 addons enable metrics-server
	
	I1019 13:14:28.071731  478871 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1019 13:14:28.075441  478871 addons.go:514] duration metric: took 13.249476766s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1019 13:14:28.080277  478871 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 13:14:28.081912  478871 api_server.go:141] control plane version: v1.28.0
	I1019 13:14:28.081940  478871 api_server.go:131] duration metric: took 15.972603ms to wait for apiserver health ...
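	The healthz probe above is a plain HTTPS GET against the apiserver and can be reproduced by hand (a sketch: the address comes from the log, and -k skips CA verification, which is acceptable here because /healthz is readable by unauthenticated clients via the default system:public-info-viewer role):
	
	  curl -k https://192.168.85.2:8443/healthz
	  # ok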
	I1019 13:14:28.081949  478871 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 13:14:28.089355  478871 system_pods.go:59] 8 kube-system pods found
	I1019 13:14:28.089398  478871 system_pods.go:61] "coredns-5dd5756b68-5mdz7" [ca5b3ce0-02bc-47cc-b0d7-22b5c87208b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:14:28.089409  478871 system_pods.go:61] "etcd-old-k8s-version-842494" [2bf4f656-402c-4f79-9cfc-4b649bae8703] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:14:28.089416  478871 system_pods.go:61] "kindnet-7lwtw" [31f7a27a-624a-440e-84d7-fc0904e489e0] Running
	I1019 13:14:28.089424  478871 system_pods.go:61] "kube-apiserver-old-k8s-version-842494" [2293b38d-1205-48a7-b69f-9bc04feefd86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:14:28.089438  478871 system_pods.go:61] "kube-controller-manager-old-k8s-version-842494" [f121adf5-56bf-42b2-9679-b3edb6ce28bf] Running
	I1019 13:14:28.089451  478871 system_pods.go:61] "kube-proxy-v7wq7" [11b55bd3-7ea2-4af9-ab7e-13998f6917c5] Running
	I1019 13:14:28.089458  478871 system_pods.go:61] "kube-scheduler-old-k8s-version-842494" [952c5039-937f-4b8c-af62-3862981103f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:14:28.089462  478871 system_pods.go:61] "storage-provisioner" [3d912c2a-b19f-4951-993f-64c474ba1b27] Running
	I1019 13:14:28.089474  478871 system_pods.go:74] duration metric: took 7.518664ms to wait for pod list to return data ...
	I1019 13:14:28.089485  478871 default_sa.go:34] waiting for default service account to be created ...
	I1019 13:14:28.093636  478871 default_sa.go:45] found service account: "default"
	I1019 13:14:28.093666  478871 default_sa.go:55] duration metric: took 4.170689ms for default service account to be created ...
	I1019 13:14:28.093725  478871 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 13:14:28.099828  478871 system_pods.go:86] 8 kube-system pods found
	I1019 13:14:28.099877  478871 system_pods.go:89] "coredns-5dd5756b68-5mdz7" [ca5b3ce0-02bc-47cc-b0d7-22b5c87208b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:14:28.099887  478871 system_pods.go:89] "etcd-old-k8s-version-842494" [2bf4f656-402c-4f79-9cfc-4b649bae8703] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:14:28.099894  478871 system_pods.go:89] "kindnet-7lwtw" [31f7a27a-624a-440e-84d7-fc0904e489e0] Running
	I1019 13:14:28.099902  478871 system_pods.go:89] "kube-apiserver-old-k8s-version-842494" [2293b38d-1205-48a7-b69f-9bc04feefd86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:14:28.099906  478871 system_pods.go:89] "kube-controller-manager-old-k8s-version-842494" [f121adf5-56bf-42b2-9679-b3edb6ce28bf] Running
	I1019 13:14:28.099911  478871 system_pods.go:89] "kube-proxy-v7wq7" [11b55bd3-7ea2-4af9-ab7e-13998f6917c5] Running
	I1019 13:14:28.099931  478871 system_pods.go:89] "kube-scheduler-old-k8s-version-842494" [952c5039-937f-4b8c-af62-3862981103f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:14:28.099942  478871 system_pods.go:89] "storage-provisioner" [3d912c2a-b19f-4951-993f-64c474ba1b27] Running
	I1019 13:14:28.099951  478871 system_pods.go:126] duration metric: took 6.217633ms to wait for k8s-apps to be running ...
	I1019 13:14:28.099963  478871 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 13:14:28.100032  478871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:14:28.129138  478871 system_svc.go:56] duration metric: took 29.165788ms WaitForService to wait for kubelet
	I1019 13:14:28.129180  478871 kubeadm.go:586] duration metric: took 13.303598325s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:14:28.129199  478871 node_conditions.go:102] verifying NodePressure condition ...
	I1019 13:14:28.133466  478871 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 13:14:28.133513  478871 node_conditions.go:123] node cpu capacity is 2
	I1019 13:14:28.133525  478871 node_conditions.go:105] duration metric: took 4.320345ms to run NodePressure ...
	I1019 13:14:28.133538  478871 start.go:241] waiting for startup goroutines ...
	I1019 13:14:28.133546  478871 start.go:246] waiting for cluster config update ...
	I1019 13:14:28.133556  478871 start.go:255] writing updated cluster config ...
	I1019 13:14:28.133901  478871 ssh_runner.go:195] Run: rm -f paused
	I1019 13:14:28.140837  478871 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:14:28.145356  478871 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5mdz7" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 13:14:30.152656  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	I1019 13:14:27.484390  475820 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 13:14:27.501743  475820 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 13:14:27.501762  475820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 13:14:27.535637  475820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
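	With the CNI manifest applied, the resulting kindnet rollout can be watched directly. A sketch, assuming the DaemonSet is named kindnet as in minikube's bundled manifest:
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system rollout status daemonset/kindnet --timeout=2m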
	I1019 13:14:27.950774  475820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 13:14:27.950903  475820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:14:27.950968  475820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-108149 minikube.k8s.io/updated_at=2025_10_19T13_14_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=no-preload-108149 minikube.k8s.io/primary=true
	I1019 13:14:28.221838  475820 ops.go:34] apiserver oom_adj: -16
	I1019 13:14:28.222028  475820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:14:28.722202  475820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:14:29.222911  475820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:14:29.723064  475820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:14:30.222597  475820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:14:30.722988  475820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:14:31.222112  475820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:14:31.338807  475820 kubeadm.go:1113] duration metric: took 3.387947608s to wait for elevateKubeSystemPrivileges
	I1019 13:14:31.338841  475820 kubeadm.go:402] duration metric: took 24.384170109s to StartCluster
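	The half-second `kubectl get sa default` retries above are minikube waiting for the default ServiceAccount to exist before binding kube-system:default to cluster-admin (the minikube-rbac ClusterRoleBinding created earlier). An equivalent shell poll, as a sketch rather than minikube's actual implementation:
	
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # matches the ~500ms cadence visible in the timestamps
	  done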
	I1019 13:14:31.338860  475820 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:14:31.338922  475820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:14:31.339960  475820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:14:31.340235  475820 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:14:31.340372  475820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 13:14:31.340636  475820 config.go:182] Loaded profile config "no-preload-108149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:14:31.340681  475820 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 13:14:31.340749  475820 addons.go:69] Setting storage-provisioner=true in profile "no-preload-108149"
	I1019 13:14:31.340769  475820 addons.go:238] Setting addon storage-provisioner=true in "no-preload-108149"
	I1019 13:14:31.340794  475820 host.go:66] Checking if "no-preload-108149" exists ...
	I1019 13:14:31.341296  475820 cli_runner.go:164] Run: docker container inspect no-preload-108149 --format={{.State.Status}}
	I1019 13:14:31.341781  475820 addons.go:69] Setting default-storageclass=true in profile "no-preload-108149"
	I1019 13:14:31.341805  475820 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-108149"
	I1019 13:14:31.342094  475820 cli_runner.go:164] Run: docker container inspect no-preload-108149 --format={{.State.Status}}
	I1019 13:14:31.343994  475820 out.go:179] * Verifying Kubernetes components...
	I1019 13:14:31.349005  475820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:14:31.384313  475820 addons.go:238] Setting addon default-storageclass=true in "no-preload-108149"
	I1019 13:14:31.384361  475820 host.go:66] Checking if "no-preload-108149" exists ...
	I1019 13:14:31.384829  475820 cli_runner.go:164] Run: docker container inspect no-preload-108149 --format={{.State.Status}}
	I1019 13:14:31.386877  475820 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 13:14:31.389900  475820 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:14:31.389928  475820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 13:14:31.390001  475820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:14:31.431315  475820 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 13:14:31.431341  475820 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 13:14:31.431408  475820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:14:31.444513  475820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:14:31.481744  475820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:14:31.719006  475820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 13:14:31.745262  475820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:14:31.786986  475820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:14:31.823210  475820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 13:14:32.183597  475820 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
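	The injected host record can be confirmed in the live ConfigMap; the hosts block is exactly what the sed pipeline logged above inserts:
	
	  kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	  #        hosts {
	  #           192.168.76.1 host.minikube.internal
	  #           fallthrough
	  #        }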
	I1019 13:14:32.185558  475820 node_ready.go:35] waiting up to 6m0s for node "no-preload-108149" to be "Ready" ...
	I1019 13:14:32.552203  475820 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1019 13:14:32.650714  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	W1019 13:14:34.651726  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	I1019 13:14:32.554984  475820 addons.go:514] duration metric: took 1.214281468s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 13:14:32.687790  475820 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-108149" context rescaled to 1 replicas
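	The rescale logged here is equivalent to the following CLI call (a sketch: minikube performs it through the API rather than by shelling out):
	
	  kubectl -n kube-system scale deployment coredns --replicas=1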
	W1019 13:14:34.188798  475820 node_ready.go:57] node "no-preload-108149" has "Ready":"False" status (will retry)
	W1019 13:14:36.688571  475820 node_ready.go:57] node "no-preload-108149" has "Ready":"False" status (will retry)
	W1019 13:14:37.151627  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	W1019 13:14:39.151802  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	W1019 13:14:41.152198  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	W1019 13:14:38.688825  475820 node_ready.go:57] node "no-preload-108149" has "Ready":"False" status (will retry)
	W1019 13:14:41.188749  475820 node_ready.go:57] node "no-preload-108149" has "Ready":"False" status (will retry)
	W1019 13:14:43.153669  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	W1019 13:14:45.204943  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	W1019 13:14:43.688660  475820 node_ready.go:57] node "no-preload-108149" has "Ready":"False" status (will retry)
	W1019 13:14:45.689297  475820 node_ready.go:57] node "no-preload-108149" has "Ready":"False" status (will retry)
	I1019 13:14:47.191084  475820 node_ready.go:49] node "no-preload-108149" is "Ready"
	I1019 13:14:47.191117  475820 node_ready.go:38] duration metric: took 15.005537627s for node "no-preload-108149" to be "Ready" ...
	I1019 13:14:47.191130  475820 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:14:47.191191  475820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:14:47.225454  475820 api_server.go:72] duration metric: took 15.885176385s to wait for apiserver process to appear ...
	I1019 13:14:47.225500  475820 api_server.go:88] waiting for apiserver healthz status ...
	I1019 13:14:47.225641  475820 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 13:14:47.250898  475820 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 13:14:47.252113  475820 api_server.go:141] control plane version: v1.34.1
	I1019 13:14:47.252157  475820 api_server.go:131] duration metric: took 26.648586ms to wait for apiserver health ...
	I1019 13:14:47.252184  475820 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 13:14:47.260086  475820 system_pods.go:59] 8 kube-system pods found
	I1019 13:14:47.260185  475820 system_pods.go:61] "coredns-66bc5c9577-qp7k5" [0f0731c8-758f-4a89-9d62-19ff52f8d9ee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:14:47.260208  475820 system_pods.go:61] "etcd-no-preload-108149" [288fa476-5552-477a-8958-75fb017c1f15] Running
	I1019 13:14:47.260240  475820 system_pods.go:61] "kindnet-s5wgc" [eecfcd8e-961b-4469-8bab-a15f4053fcae] Running
	I1019 13:14:47.260263  475820 system_pods.go:61] "kube-apiserver-no-preload-108149" [7fc22236-bfa6-43f2-888e-899c1802dccf] Running
	I1019 13:14:47.260284  475820 system_pods.go:61] "kube-controller-manager-no-preload-108149" [589ab894-5b6a-4901-ae64-033a1841821c] Running
	I1019 13:14:47.260305  475820 system_pods.go:61] "kube-proxy-qfr27" [12f5f5aa-7552-44bc-9a49-879a274e9a57] Running
	I1019 13:14:47.260338  475820 system_pods.go:61] "kube-scheduler-no-preload-108149" [fd497e0f-9bce-4bda-850f-ddc249fc05c3] Running
	I1019 13:14:47.260363  475820 system_pods.go:61] "storage-provisioner" [7de7f3d6-6098-48a3-966a-f0a82622bdeb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:14:47.260383  475820 system_pods.go:74] duration metric: took 8.191724ms to wait for pod list to return data ...
	I1019 13:14:47.260406  475820 default_sa.go:34] waiting for default service account to be created ...
	I1019 13:14:47.263323  475820 default_sa.go:45] found service account: "default"
	I1019 13:14:47.263387  475820 default_sa.go:55] duration metric: took 2.959072ms for default service account to be created ...
	I1019 13:14:47.263411  475820 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 13:14:47.266927  475820 system_pods.go:86] 8 kube-system pods found
	I1019 13:14:47.267008  475820 system_pods.go:89] "coredns-66bc5c9577-qp7k5" [0f0731c8-758f-4a89-9d62-19ff52f8d9ee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:14:47.267030  475820 system_pods.go:89] "etcd-no-preload-108149" [288fa476-5552-477a-8958-75fb017c1f15] Running
	I1019 13:14:47.267068  475820 system_pods.go:89] "kindnet-s5wgc" [eecfcd8e-961b-4469-8bab-a15f4053fcae] Running
	I1019 13:14:47.267093  475820 system_pods.go:89] "kube-apiserver-no-preload-108149" [7fc22236-bfa6-43f2-888e-899c1802dccf] Running
	I1019 13:14:47.267113  475820 system_pods.go:89] "kube-controller-manager-no-preload-108149" [589ab894-5b6a-4901-ae64-033a1841821c] Running
	I1019 13:14:47.267134  475820 system_pods.go:89] "kube-proxy-qfr27" [12f5f5aa-7552-44bc-9a49-879a274e9a57] Running
	I1019 13:14:47.267167  475820 system_pods.go:89] "kube-scheduler-no-preload-108149" [fd497e0f-9bce-4bda-850f-ddc249fc05c3] Running
	I1019 13:14:47.267191  475820 system_pods.go:89] "storage-provisioner" [7de7f3d6-6098-48a3-966a-f0a82622bdeb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:14:47.267239  475820 retry.go:31] will retry after 298.654289ms: missing components: kube-dns
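	The "missing components: kube-dns" retry is waiting for CoreDNS to turn Ready; the same condition can be awaited explicitly (a sketch: coredns pods carry the k8s-app=kube-dns label):
	
	  kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=2m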
	I1019 13:14:47.576993  475820 system_pods.go:86] 8 kube-system pods found
	I1019 13:14:47.577080  475820 system_pods.go:89] "coredns-66bc5c9577-qp7k5" [0f0731c8-758f-4a89-9d62-19ff52f8d9ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:14:47.577104  475820 system_pods.go:89] "etcd-no-preload-108149" [288fa476-5552-477a-8958-75fb017c1f15] Running
	I1019 13:14:47.577143  475820 system_pods.go:89] "kindnet-s5wgc" [eecfcd8e-961b-4469-8bab-a15f4053fcae] Running
	I1019 13:14:47.577168  475820 system_pods.go:89] "kube-apiserver-no-preload-108149" [7fc22236-bfa6-43f2-888e-899c1802dccf] Running
	I1019 13:14:47.577188  475820 system_pods.go:89] "kube-controller-manager-no-preload-108149" [589ab894-5b6a-4901-ae64-033a1841821c] Running
	I1019 13:14:47.577209  475820 system_pods.go:89] "kube-proxy-qfr27" [12f5f5aa-7552-44bc-9a49-879a274e9a57] Running
	I1019 13:14:47.577230  475820 system_pods.go:89] "kube-scheduler-no-preload-108149" [fd497e0f-9bce-4bda-850f-ddc249fc05c3] Running
	I1019 13:14:47.577549  475820 system_pods.go:89] "storage-provisioner" [7de7f3d6-6098-48a3-966a-f0a82622bdeb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:14:47.587307  475820 system_pods.go:126] duration metric: took 314.155362ms to wait for k8s-apps to be running ...
	I1019 13:14:47.587400  475820 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 13:14:47.587510  475820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:14:47.620491  475820 system_svc.go:56] duration metric: took 33.083954ms WaitForService to wait for kubelet
	I1019 13:14:47.620566  475820 kubeadm.go:586] duration metric: took 16.280295579s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:14:47.620605  475820 node_conditions.go:102] verifying NodePressure condition ...
	I1019 13:14:47.624164  475820 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 13:14:47.624246  475820 node_conditions.go:123] node cpu capacity is 2
	I1019 13:14:47.624276  475820 node_conditions.go:105] duration metric: took 3.634707ms to run NodePressure ...
	I1019 13:14:47.624303  475820 start.go:241] waiting for startup goroutines ...
	I1019 13:14:47.624335  475820 start.go:246] waiting for cluster config update ...
	I1019 13:14:47.624414  475820 start.go:255] writing updated cluster config ...
	I1019 13:14:47.624743  475820 ssh_runner.go:195] Run: rm -f paused
	I1019 13:14:47.630249  475820 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:14:47.639399  475820 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qp7k5" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:14:48.644780  475820 pod_ready.go:94] pod "coredns-66bc5c9577-qp7k5" is "Ready"
	I1019 13:14:48.644812  475820 pod_ready.go:86] duration metric: took 1.005337189s for pod "coredns-66bc5c9577-qp7k5" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:14:48.651483  475820 pod_ready.go:83] waiting for pod "etcd-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:14:48.660228  475820 pod_ready.go:94] pod "etcd-no-preload-108149" is "Ready"
	I1019 13:14:48.660301  475820 pod_ready.go:86] duration metric: took 8.741352ms for pod "etcd-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:14:48.662880  475820 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:14:48.670507  475820 pod_ready.go:94] pod "kube-apiserver-no-preload-108149" is "Ready"
	I1019 13:14:48.670535  475820 pod_ready.go:86] duration metric: took 7.590723ms for pod "kube-apiserver-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:14:48.672843  475820 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:14:48.843438  475820 pod_ready.go:94] pod "kube-controller-manager-no-preload-108149" is "Ready"
	I1019 13:14:48.843468  475820 pod_ready.go:86] duration metric: took 170.596102ms for pod "kube-controller-manager-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:14:49.042521  475820 pod_ready.go:83] waiting for pod "kube-proxy-qfr27" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:14:49.442695  475820 pod_ready.go:94] pod "kube-proxy-qfr27" is "Ready"
	I1019 13:14:49.442721  475820 pod_ready.go:86] duration metric: took 400.172817ms for pod "kube-proxy-qfr27" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:14:49.643293  475820 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:14:50.042877  475820 pod_ready.go:94] pod "kube-scheduler-no-preload-108149" is "Ready"
	I1019 13:14:50.042908  475820 pod_ready.go:86] duration metric: took 399.591632ms for pod "kube-scheduler-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:14:50.042920  475820 pod_ready.go:40] duration metric: took 2.412593886s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:14:50.102739  475820 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 13:14:50.106106  475820 out.go:179] * Done! kubectl is now configured to use "no-preload-108149" cluster and "default" namespace by default
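	With the profile finished, the context minikube wrote can be verified from the host (a sketch):
	
	  kubectl config current-context   # no-preload-108149
	  kubectl get nodes                # one control-plane node, Ready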
	W1019 13:14:47.659151  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	W1019 13:14:50.152193  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	W1019 13:14:52.651757  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	W1019 13:14:55.151864  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	W1019 13:14:57.152580  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	W1019 13:14:59.159322  478871 pod_ready.go:104] pod "coredns-5dd5756b68-5mdz7" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 19 13:14:47 no-preload-108149 crio[837]: time="2025-10-19T13:14:47.414245616Z" level=info msg="Starting container: 5678114958a6343951d5f49f0c4e9f3bb36bbd9a4fbd784b46550047e41888ab" id=35fd41c7-f5ea-4bf7-b6fe-cdcda7f424cc name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:14:47 no-preload-108149 crio[837]: time="2025-10-19T13:14:47.421963233Z" level=info msg="Started container" PID=2465 containerID=8dee1b26c5b029d23d535f3173cd270bee40906b85802580c2dfe97cf41a9835 description=kube-system/coredns-66bc5c9577-qp7k5/coredns id=6d868c61-f882-4cdc-8da6-8a33dd78a56c name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a347819f09ca728983c060fd96266c6fc3b965e7e8613421ef31bc6a648fb48
	Oct 19 13:14:47 no-preload-108149 crio[837]: time="2025-10-19T13:14:47.4275494Z" level=info msg="Started container" PID=2460 containerID=5678114958a6343951d5f49f0c4e9f3bb36bbd9a4fbd784b46550047e41888ab description=kube-system/storage-provisioner/storage-provisioner id=35fd41c7-f5ea-4bf7-b6fe-cdcda7f424cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=768614507ed2848a44e1054b2b20cf0ece8b005ba2669ca6d141bc6a4075cdf6
	Oct 19 13:14:50 no-preload-108149 crio[837]: time="2025-10-19T13:14:50.633756432Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0864a18d-f5e8-4bba-b01f-da23f08cb8a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:14:50 no-preload-108149 crio[837]: time="2025-10-19T13:14:50.633843063Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:14:50 no-preload-108149 crio[837]: time="2025-10-19T13:14:50.639513202Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d73fce9e4a5dff4f997b7ea39713ab7b7045d792d02c7de4a16b5d240a6422d8 UID:dc85de5e-425e-47b2-916e-f27d88458ea3 NetNS:/var/run/netns/084e9559-02f9-4708-a6a7-2b8ffe835122 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400155e4c0}] Aliases:map[]}"
	Oct 19 13:14:50 no-preload-108149 crio[837]: time="2025-10-19T13:14:50.639551496Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 13:14:50 no-preload-108149 crio[837]: time="2025-10-19T13:14:50.655471094Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d73fce9e4a5dff4f997b7ea39713ab7b7045d792d02c7de4a16b5d240a6422d8 UID:dc85de5e-425e-47b2-916e-f27d88458ea3 NetNS:/var/run/netns/084e9559-02f9-4708-a6a7-2b8ffe835122 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400155e4c0}] Aliases:map[]}"
	Oct 19 13:14:50 no-preload-108149 crio[837]: time="2025-10-19T13:14:50.655664885Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 13:14:50 no-preload-108149 crio[837]: time="2025-10-19T13:14:50.6584482Z" level=info msg="Ran pod sandbox d73fce9e4a5dff4f997b7ea39713ab7b7045d792d02c7de4a16b5d240a6422d8 with infra container: default/busybox/POD" id=0864a18d-f5e8-4bba-b01f-da23f08cb8a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:14:50 no-preload-108149 crio[837]: time="2025-10-19T13:14:50.661054307Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3ff0e34b-b0fc-44d1-bdf7-dcd17b803d0b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:14:50 no-preload-108149 crio[837]: time="2025-10-19T13:14:50.661321182Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3ff0e34b-b0fc-44d1-bdf7-dcd17b803d0b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:14:50 no-preload-108149 crio[837]: time="2025-10-19T13:14:50.66144065Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3ff0e34b-b0fc-44d1-bdf7-dcd17b803d0b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:14:50 no-preload-108149 crio[837]: time="2025-10-19T13:14:50.664156059Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b3074851-faad-4054-8c6a-a3e57413c676 name=/runtime.v1.ImageService/PullImage
	Oct 19 13:14:50 no-preload-108149 crio[837]: time="2025-10-19T13:14:50.667935966Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 13:14:52 no-preload-108149 crio[837]: time="2025-10-19T13:14:52.575971812Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b3074851-faad-4054-8c6a-a3e57413c676 name=/runtime.v1.ImageService/PullImage
	Oct 19 13:14:52 no-preload-108149 crio[837]: time="2025-10-19T13:14:52.57692129Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=668d3d08-a529-43d4-bc13-94f2756dc364 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:14:52 no-preload-108149 crio[837]: time="2025-10-19T13:14:52.580167946Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e8184edb-16d6-408e-8123-edc93fc1ffd3 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:14:52 no-preload-108149 crio[837]: time="2025-10-19T13:14:52.586705847Z" level=info msg="Creating container: default/busybox/busybox" id=33d0ea68-b46e-47f6-aa0f-e076a169f920 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:14:52 no-preload-108149 crio[837]: time="2025-10-19T13:14:52.587642861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:14:52 no-preload-108149 crio[837]: time="2025-10-19T13:14:52.592206081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:14:52 no-preload-108149 crio[837]: time="2025-10-19T13:14:52.592648802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:14:52 no-preload-108149 crio[837]: time="2025-10-19T13:14:52.608289077Z" level=info msg="Created container d65863b95ff5bb80357206c07fd7828ad7dafba020320baa59668e7fffc715dd: default/busybox/busybox" id=33d0ea68-b46e-47f6-aa0f-e076a169f920 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:14:52 no-preload-108149 crio[837]: time="2025-10-19T13:14:52.610874425Z" level=info msg="Starting container: d65863b95ff5bb80357206c07fd7828ad7dafba020320baa59668e7fffc715dd" id=d4a41a9d-f56a-4f45-965a-917c50fb20d1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:14:52 no-preload-108149 crio[837]: time="2025-10-19T13:14:52.613486414Z" level=info msg="Started container" PID=2525 containerID=d65863b95ff5bb80357206c07fd7828ad7dafba020320baa59668e7fffc715dd description=default/busybox/busybox id=d4a41a9d-f56a-4f45-965a-917c50fb20d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d73fce9e4a5dff4f997b7ea39713ab7b7045d792d02c7de4a16b5d240a6422d8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d65863b95ff5b       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   d73fce9e4a5df       busybox                                     default
	8dee1b26c5b02       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   7a347819f09ca       coredns-66bc5c9577-qp7k5                    kube-system
	5678114958a63       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   768614507ed28       storage-provisioner                         kube-system
	2ddd1682b3a7a       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   c90d5f1de2db9       kindnet-s5wgc                               kube-system
	f2e7d8b635891       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   f2d9ca2250392       kube-proxy-qfr27                            kube-system
	b8fdb8007986a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      47 seconds ago      Running             kube-controller-manager   0                   fa1a485be9399       kube-controller-manager-no-preload-108149   kube-system
	f21b60c4652ff       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      47 seconds ago      Running             kube-scheduler            0                   8b18e116d624b       kube-scheduler-no-preload-108149            kube-system
	38186359b4359       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      47 seconds ago      Running             etcd                      0                   9278d195bef6f       etcd-no-preload-108149                      kube-system
	717e70c6f4708       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      47 seconds ago      Running             kube-apiserver            0                   2067d526a6d05       kube-apiserver-no-preload-108149            kube-system
	
	
	==> coredns [8dee1b26c5b029d23d535f3173cd270bee40906b85802580c2dfe97cf41a9835] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50849 - 51301 "HINFO IN 5797436841083495765.5390739486676253358. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026973218s
	
	
	==> describe nodes <==
	Name:               no-preload-108149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-108149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=no-preload-108149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_14_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:14:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-108149
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:14:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:14:58 +0000   Sun, 19 Oct 2025 13:14:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:14:58 +0000   Sun, 19 Oct 2025 13:14:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:14:58 +0000   Sun, 19 Oct 2025 13:14:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:14:58 +0000   Sun, 19 Oct 2025 13:14:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-108149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                a4d8c0d2-63fb-4a48-994a-8850e6b21b64
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-qp7k5                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-no-preload-108149                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-s5wgc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-108149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-108149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-qfr27                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-108149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Warning  CgroupV1                 48s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node no-preload-108149 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node no-preload-108149 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node no-preload-108149 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-108149 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-108149 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-108149 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node no-preload-108149 event: Registered Node no-preload-108149 in Controller
	  Normal   NodeReady                16s                kubelet          Node no-preload-108149 status is now: NodeReady
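	The node dump above is ordinary kubectl output and can be regenerated, or narrowed to the Ready condition, with (a sketch):
	
	  kubectl describe node no-preload-108149
	  kubectl get node no-preload-108149 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # True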
	
	
	==> dmesg <==
	[Oct19 12:50] overlayfs: idmapped layers are currently not supported
	[Oct19 12:51] overlayfs: idmapped layers are currently not supported
	[Oct19 12:52] overlayfs: idmapped layers are currently not supported
	[Oct19 12:53] overlayfs: idmapped layers are currently not supported
	[Oct19 12:54] overlayfs: idmapped layers are currently not supported
	[Oct19 12:56] overlayfs: idmapped layers are currently not supported
	[ +16.315179] overlayfs: idmapped layers are currently not supported
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [38186359b435940eb88fe181b722e115710b0a9f60a0159a468219ac98d97f6a] <==
	{"level":"warn","ts":"2025-10-19T13:14:19.878790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:19.901941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:19.999998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.036915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.086588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.129936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.207502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.236302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.293583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.335049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.383818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.408941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.452005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.499962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.512738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.548838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.582509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.601757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.641964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.673819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.714894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.762302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.795302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:20.819564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:14:21.002573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33090","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:15:02 up  2:57,  0 user,  load average: 3.46, 3.01, 2.63
	Linux no-preload-108149 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2ddd1682b3a7a8839fd304b82a613bfcbbda303a3d661d1e3f9e501506b840bb] <==
	I1019 13:14:36.313896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:14:36.405845       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 13:14:36.406014       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:14:36.406038       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:14:36.406051       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:14:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:14:36.516386       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:14:36.605753       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:14:36.610769       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:14:36.610986       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 13:14:36.811785       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:14:36.811979       1 metrics.go:72] Registering metrics
	I1019 13:14:36.812085       1 controller.go:711] "Syncing nftables rules"
	I1019 13:14:46.528973       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:14:46.529102       1 main.go:301] handling current node
	I1019 13:14:56.517367       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:14:56.517485       1 main.go:301] handling current node
	
	
	==> kube-apiserver [717e70c6f47085cbf413c705e7cc4b6dd3ba7e45701cffc5d4b234a08090b9ec] <==
	I1019 13:14:22.633202       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 13:14:22.633812       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 13:14:22.632727       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:14:22.633113       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1019 13:14:22.688903       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:14:22.688958       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 13:14:22.774040       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:14:22.782110       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 13:14:23.231154       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 13:14:23.277928       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 13:14:23.277948       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:14:24.860788       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:14:24.958993       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:14:25.143124       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 13:14:25.158609       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1019 13:14:25.160172       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 13:14:25.199164       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:14:25.355391       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:14:26.949566       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 13:14:26.986663       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 13:14:27.003866       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 13:14:31.260331       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 13:14:31.507996       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1019 13:14:31.546724       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:14:31.602863       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [b8fdb8007986a2a127860fde468310a38712a6acc4804482f5c8034f4fdb5728] <==
	I1019 13:14:30.403966       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 13:14:30.405049       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 13:14:30.405106       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 13:14:30.407893       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 13:14:30.407912       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 13:14:30.407958       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 13:14:30.407988       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 13:14:30.408000       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 13:14:30.408005       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 13:14:30.408024       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 13:14:30.408186       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 13:14:30.408214       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 13:14:30.412173       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:14:30.413466       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:14:30.414534       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 13:14:30.414626       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 13:14:30.415326       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-108149"
	I1019 13:14:30.415380       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 13:14:30.417875       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 13:14:30.418590       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-108149" podCIDRs=["10.244.0.0/24"]
	I1019 13:14:30.425753       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 13:14:30.426964       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:14:30.426981       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 13:14:30.426989       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 13:14:50.419112       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f2e7d8b63589186ad448af19e9dd2d247791d520b5e0d81daf0f8df9277237f6] <==
	I1019 13:14:33.612945       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:14:33.703248       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:14:33.803586       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:14:33.803690       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 13:14:33.803793       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:14:33.822605       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:14:33.822666       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:14:33.826990       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:14:33.827306       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:14:33.827331       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:14:33.828515       1 config.go:200] "Starting service config controller"
	I1019 13:14:33.828541       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:14:33.828580       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:14:33.828584       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:14:33.828613       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:14:33.828624       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:14:33.832837       1 config.go:309] "Starting node config controller"
	I1019 13:14:33.832860       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:14:33.832868       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:14:33.928687       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:14:33.928722       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 13:14:33.928773       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f21b60c4652ffd9425089d3514a124bb0887b4cf6695314de2be780cfb29d9b4] <==
	I1019 13:14:20.784893       1 serving.go:386] Generated self-signed cert in-memory
	I1019 13:14:25.961509       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 13:14:25.962179       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:14:25.978732       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:14:25.979541       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 13:14:25.979777       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 13:14:25.980699       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 13:14:25.981326       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:14:25.981345       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:14:25.981362       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 13:14:25.981368       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 13:14:26.081232       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 13:14:26.081367       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:14:26.081396       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 19 13:14:31 no-preload-108149 kubelet[1991]: I1019 13:14:31.751521    1991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12f5f5aa-7552-44bc-9a49-879a274e9a57-lib-modules\") pod \"kube-proxy-qfr27\" (UID: \"12f5f5aa-7552-44bc-9a49-879a274e9a57\") " pod="kube-system/kube-proxy-qfr27"
	Oct 19 13:14:31 no-preload-108149 kubelet[1991]: I1019 13:14:31.751575    1991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eecfcd8e-961b-4469-8bab-a15f4053fcae-cni-cfg\") pod \"kindnet-s5wgc\" (UID: \"eecfcd8e-961b-4469-8bab-a15f4053fcae\") " pod="kube-system/kindnet-s5wgc"
	Oct 19 13:14:31 no-preload-108149 kubelet[1991]: I1019 13:14:31.751594    1991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eecfcd8e-961b-4469-8bab-a15f4053fcae-xtables-lock\") pod \"kindnet-s5wgc\" (UID: \"eecfcd8e-961b-4469-8bab-a15f4053fcae\") " pod="kube-system/kindnet-s5wgc"
	Oct 19 13:14:31 no-preload-108149 kubelet[1991]: I1019 13:14:31.751609    1991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eecfcd8e-961b-4469-8bab-a15f4053fcae-lib-modules\") pod \"kindnet-s5wgc\" (UID: \"eecfcd8e-961b-4469-8bab-a15f4053fcae\") " pod="kube-system/kindnet-s5wgc"
	Oct 19 13:14:31 no-preload-108149 kubelet[1991]: I1019 13:14:31.751625    1991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/12f5f5aa-7552-44bc-9a49-879a274e9a57-kube-proxy\") pod \"kube-proxy-qfr27\" (UID: \"12f5f5aa-7552-44bc-9a49-879a274e9a57\") " pod="kube-system/kube-proxy-qfr27"
	Oct 19 13:14:31 no-preload-108149 kubelet[1991]: I1019 13:14:31.751645    1991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cljcm\" (UniqueName: \"kubernetes.io/projected/eecfcd8e-961b-4469-8bab-a15f4053fcae-kube-api-access-cljcm\") pod \"kindnet-s5wgc\" (UID: \"eecfcd8e-961b-4469-8bab-a15f4053fcae\") " pod="kube-system/kindnet-s5wgc"
	Oct 19 13:14:31 no-preload-108149 kubelet[1991]: I1019 13:14:31.751660    1991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12f5f5aa-7552-44bc-9a49-879a274e9a57-xtables-lock\") pod \"kube-proxy-qfr27\" (UID: \"12f5f5aa-7552-44bc-9a49-879a274e9a57\") " pod="kube-system/kube-proxy-qfr27"
	Oct 19 13:14:31 no-preload-108149 kubelet[1991]: I1019 13:14:31.751676    1991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltxw4\" (UniqueName: \"kubernetes.io/projected/12f5f5aa-7552-44bc-9a49-879a274e9a57-kube-api-access-ltxw4\") pod \"kube-proxy-qfr27\" (UID: \"12f5f5aa-7552-44bc-9a49-879a274e9a57\") " pod="kube-system/kube-proxy-qfr27"
	Oct 19 13:14:32 no-preload-108149 kubelet[1991]: E1019 13:14:32.909211    1991 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 19 13:14:32 no-preload-108149 kubelet[1991]: E1019 13:14:32.909266    1991 projected.go:196] Error preparing data for projected volume kube-api-access-ltxw4 for pod kube-system/kube-proxy-qfr27: failed to sync configmap cache: timed out waiting for the condition
	Oct 19 13:14:32 no-preload-108149 kubelet[1991]: E1019 13:14:32.909360    1991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12f5f5aa-7552-44bc-9a49-879a274e9a57-kube-api-access-ltxw4 podName:12f5f5aa-7552-44bc-9a49-879a274e9a57 nodeName:}" failed. No retries permitted until 2025-10-19 13:14:33.40933517 +0000 UTC m=+6.527961179 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ltxw4" (UniqueName: "kubernetes.io/projected/12f5f5aa-7552-44bc-9a49-879a274e9a57-kube-api-access-ltxw4") pod "kube-proxy-qfr27" (UID: "12f5f5aa-7552-44bc-9a49-879a274e9a57") : failed to sync configmap cache: timed out waiting for the condition
	Oct 19 13:14:32 no-preload-108149 kubelet[1991]: E1019 13:14:32.914515    1991 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 19 13:14:32 no-preload-108149 kubelet[1991]: E1019 13:14:32.914556    1991 projected.go:196] Error preparing data for projected volume kube-api-access-cljcm for pod kube-system/kindnet-s5wgc: failed to sync configmap cache: timed out waiting for the condition
	Oct 19 13:14:32 no-preload-108149 kubelet[1991]: E1019 13:14:32.914628    1991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eecfcd8e-961b-4469-8bab-a15f4053fcae-kube-api-access-cljcm podName:eecfcd8e-961b-4469-8bab-a15f4053fcae nodeName:}" failed. No retries permitted until 2025-10-19 13:14:33.41460669 +0000 UTC m=+6.533232708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cljcm" (UniqueName: "kubernetes.io/projected/eecfcd8e-961b-4469-8bab-a15f4053fcae-kube-api-access-cljcm") pod "kindnet-s5wgc" (UID: "eecfcd8e-961b-4469-8bab-a15f4053fcae") : failed to sync configmap cache: timed out waiting for the condition
	Oct 19 13:14:33 no-preload-108149 kubelet[1991]: I1019 13:14:33.464769    1991 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 19 13:14:35 no-preload-108149 kubelet[1991]: I1019 13:14:35.674110    1991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qfr27" podStartSLOduration=4.674094216 podStartE2EDuration="4.674094216s" podCreationTimestamp="2025-10-19 13:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:14:34.464776911 +0000 UTC m=+7.583402954" watchObservedRunningTime="2025-10-19 13:14:35.674094216 +0000 UTC m=+8.792720225"
	Oct 19 13:14:46 no-preload-108149 kubelet[1991]: I1019 13:14:46.902143    1991 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 13:14:46 no-preload-108149 kubelet[1991]: I1019 13:14:46.958153    1991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s5wgc" podStartSLOduration=13.26569285 podStartE2EDuration="15.958114686s" podCreationTimestamp="2025-10-19 13:14:31 +0000 UTC" firstStartedPulling="2025-10-19 13:14:33.514594981 +0000 UTC m=+6.633220991" lastFinishedPulling="2025-10-19 13:14:36.207016809 +0000 UTC m=+9.325642827" observedRunningTime="2025-10-19 13:14:36.49391929 +0000 UTC m=+9.612545308" watchObservedRunningTime="2025-10-19 13:14:46.958114686 +0000 UTC m=+20.076740704"
	Oct 19 13:14:47 no-preload-108149 kubelet[1991]: I1019 13:14:47.083453    1991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7de7f3d6-6098-48a3-966a-f0a82622bdeb-tmp\") pod \"storage-provisioner\" (UID: \"7de7f3d6-6098-48a3-966a-f0a82622bdeb\") " pod="kube-system/storage-provisioner"
	Oct 19 13:14:47 no-preload-108149 kubelet[1991]: I1019 13:14:47.083654    1991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg65l\" (UniqueName: \"kubernetes.io/projected/7de7f3d6-6098-48a3-966a-f0a82622bdeb-kube-api-access-dg65l\") pod \"storage-provisioner\" (UID: \"7de7f3d6-6098-48a3-966a-f0a82622bdeb\") " pod="kube-system/storage-provisioner"
	Oct 19 13:14:47 no-preload-108149 kubelet[1991]: I1019 13:14:47.083737    1991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xn4n\" (UniqueName: \"kubernetes.io/projected/0f0731c8-758f-4a89-9d62-19ff52f8d9ee-kube-api-access-8xn4n\") pod \"coredns-66bc5c9577-qp7k5\" (UID: \"0f0731c8-758f-4a89-9d62-19ff52f8d9ee\") " pod="kube-system/coredns-66bc5c9577-qp7k5"
	Oct 19 13:14:47 no-preload-108149 kubelet[1991]: I1019 13:14:47.083822    1991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f0731c8-758f-4a89-9d62-19ff52f8d9ee-config-volume\") pod \"coredns-66bc5c9577-qp7k5\" (UID: \"0f0731c8-758f-4a89-9d62-19ff52f8d9ee\") " pod="kube-system/coredns-66bc5c9577-qp7k5"
	Oct 19 13:14:47 no-preload-108149 kubelet[1991]: I1019 13:14:47.540572    1991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qp7k5" podStartSLOduration=16.54055521 podStartE2EDuration="16.54055521s" podCreationTimestamp="2025-10-19 13:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:14:47.539898216 +0000 UTC m=+20.658524226" watchObservedRunningTime="2025-10-19 13:14:47.54055521 +0000 UTC m=+20.659181228"
	Oct 19 13:14:48 no-preload-108149 kubelet[1991]: I1019 13:14:48.544624    1991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.544603823 podStartE2EDuration="16.544603823s" podCreationTimestamp="2025-10-19 13:14:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:14:47.611947155 +0000 UTC m=+20.730573181" watchObservedRunningTime="2025-10-19 13:14:48.544603823 +0000 UTC m=+21.663229833"
	Oct 19 13:14:50 no-preload-108149 kubelet[1991]: I1019 13:14:50.425181    1991 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rck74\" (UniqueName: \"kubernetes.io/projected/dc85de5e-425e-47b2-916e-f27d88458ea3-kube-api-access-rck74\") pod \"busybox\" (UID: \"dc85de5e-425e-47b2-916e-f27d88458ea3\") " pod="default/busybox"
	
	
	==> storage-provisioner [5678114958a6343951d5f49f0c4e9f3bb36bbd9a4fbd784b46550047e41888ab] <==
	I1019 13:14:47.445980       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 13:14:47.471827       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 13:14:47.471959       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 13:14:47.481743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:14:47.494994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:14:47.495278       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 13:14:47.495773       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a5ba3f0-b17e-4468-873b-e2df26dbba12", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-108149_14d618d0-8fc9-4f8a-8ef2-8260a4dfd12a became leader
	I1019 13:14:47.512638       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-108149_14d618d0-8fc9-4f8a-8ef2-8260a4dfd12a!
	W1019 13:14:47.513657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:14:47.527643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:14:47.620772       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-108149_14d618d0-8fc9-4f8a-8ef2-8260a4dfd12a!
	W1019 13:14:49.530534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:14:49.537600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:14:51.541324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:14:51.546301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:14:53.553915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:14:53.559187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:14:55.562930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:14:55.567826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:14:57.571673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:14:57.578527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:14:59.582857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:14:59.591371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:15:01.595517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:15:01.601590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-108149 -n no-preload-108149
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-108149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.77s)
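
The storage-provisioner log above acquires its leader-election lock through the deprecated v1 Endpoints resource, which is what produces the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings. A minimal sketch of the Lease-based lock that client-go offers as the replacement, reusing the lock name and namespace from the log; the identity string and callback bodies are placeholders for illustration, not the provisioner's actual code:

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the sketch runs inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		// Same lock name and namespace as the Endpoints object in the log above.
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "sketch-identity"},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// placeholder: start the provisioner controller here
			},
			OnStoppedLeading: func() {
				// placeholder: stop work when leadership is lost
			},
		},
	})
}

Switching the lock type is the whole change; the election loop and callbacks keep the same shape as the Endpoints-based lease in the log.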

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (8.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-842494 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-842494 --alsologtostderr -v=1: exit status 80 (2.004573887s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-842494 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 13:15:18.912237  483335 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:15:18.912476  483335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:15:18.912509  483335 out.go:374] Setting ErrFile to fd 2...
	I1019 13:15:18.912534  483335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:15:18.912855  483335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:15:18.913155  483335 out.go:368] Setting JSON to false
	I1019 13:15:18.913215  483335 mustload.go:65] Loading cluster: old-k8s-version-842494
	I1019 13:15:18.913627  483335 config.go:182] Loaded profile config "old-k8s-version-842494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 13:15:18.914249  483335 cli_runner.go:164] Run: docker container inspect old-k8s-version-842494 --format={{.State.Status}}
	I1019 13:15:18.931911  483335 host.go:66] Checking if "old-k8s-version-842494" exists ...
	I1019 13:15:18.932282  483335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:15:18.993474  483335 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-19 13:15:18.983316499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:15:18.994246  483335 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-842494 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 13:15:18.999738  483335 out.go:179] * Pausing node old-k8s-version-842494 ... 
	I1019 13:15:19.002928  483335 host.go:66] Checking if "old-k8s-version-842494" exists ...
	I1019 13:15:19.003321  483335 ssh_runner.go:195] Run: systemctl --version
	I1019 13:15:19.003376  483335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-842494
	I1019 13:15:19.022216  483335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/old-k8s-version-842494/id_rsa Username:docker}
	I1019 13:15:19.125445  483335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:15:19.140381  483335 pause.go:52] kubelet running: true
	I1019 13:15:19.140518  483335 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:15:19.361839  483335 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:15:19.361979  483335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:15:19.467180  483335 cri.go:89] found id: "b705a2c9010d53b604a774ef561db9d3e61e0b62bf535ab25415f9195b64ff30"
	I1019 13:15:19.467200  483335 cri.go:89] found id: "cf9bd9d8e3217e21c9a1fc598471a6f3977f811a901df8104c4be0dd2f49a8fd"
	I1019 13:15:19.467205  483335 cri.go:89] found id: "65042321b8cee3cb9ba55a04d613c419d16e93b98467daa77830bad1dab0db52"
	I1019 13:15:19.467209  483335 cri.go:89] found id: "dd47350cd7bf7b6f9e2be9050bc252a57e4193e333974fa6bd6ac582509ea4b3"
	I1019 13:15:19.467212  483335 cri.go:89] found id: "d1b9315af72bf41414a7e6d2ce0d7b027d492620db2491d7a2387dc8a91676c4"
	I1019 13:15:19.467216  483335 cri.go:89] found id: "7b9e97a29ebf3e504604e73866544e7d0fd265d8ac39504373c3597d4796cbae"
	I1019 13:15:19.467219  483335 cri.go:89] found id: "32e954f04c57f4f9b9177fcb833b4861a5da3dff1bf1fdbdbd2c4d4bc0ebf7a3"
	I1019 13:15:19.467222  483335 cri.go:89] found id: "379b611212ba298f43db75b4d6fddb918b70f6a8d89ff799a0a9541dacd968cd"
	I1019 13:15:19.467225  483335 cri.go:89] found id: "78f63059c2fea7e2266edb01a7a8d4ae119845e91cc5ae1b0a044e0c22443f3e"
	I1019 13:15:19.467233  483335 cri.go:89] found id: "294540375ec117ef5624146472fd4938138577d72f86bb7e9d0ed89c55643c62"
	I1019 13:15:19.467237  483335 cri.go:89] found id: "f6ec3fd90a9761e8bdef08c9b10e5ab281f98ed0dcb8b87dd9247f1a32992dbf"
	I1019 13:15:19.467240  483335 cri.go:89] found id: ""
	I1019 13:15:19.467288  483335 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:15:19.479598  483335 retry.go:31] will retry after 127.743318ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:15:19Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:15:19.607979  483335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:15:19.627773  483335 pause.go:52] kubelet running: false
	I1019 13:15:19.627861  483335 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:15:19.862777  483335 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:15:19.862913  483335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:15:19.954436  483335 cri.go:89] found id: "b705a2c9010d53b604a774ef561db9d3e61e0b62bf535ab25415f9195b64ff30"
	I1019 13:15:19.954457  483335 cri.go:89] found id: "cf9bd9d8e3217e21c9a1fc598471a6f3977f811a901df8104c4be0dd2f49a8fd"
	I1019 13:15:19.954462  483335 cri.go:89] found id: "65042321b8cee3cb9ba55a04d613c419d16e93b98467daa77830bad1dab0db52"
	I1019 13:15:19.954466  483335 cri.go:89] found id: "dd47350cd7bf7b6f9e2be9050bc252a57e4193e333974fa6bd6ac582509ea4b3"
	I1019 13:15:19.954469  483335 cri.go:89] found id: "d1b9315af72bf41414a7e6d2ce0d7b027d492620db2491d7a2387dc8a91676c4"
	I1019 13:15:19.954473  483335 cri.go:89] found id: "7b9e97a29ebf3e504604e73866544e7d0fd265d8ac39504373c3597d4796cbae"
	I1019 13:15:19.954476  483335 cri.go:89] found id: "32e954f04c57f4f9b9177fcb833b4861a5da3dff1bf1fdbdbd2c4d4bc0ebf7a3"
	I1019 13:15:19.954479  483335 cri.go:89] found id: "379b611212ba298f43db75b4d6fddb918b70f6a8d89ff799a0a9541dacd968cd"
	I1019 13:15:19.954481  483335 cri.go:89] found id: "78f63059c2fea7e2266edb01a7a8d4ae119845e91cc5ae1b0a044e0c22443f3e"
	I1019 13:15:19.954487  483335 cri.go:89] found id: "294540375ec117ef5624146472fd4938138577d72f86bb7e9d0ed89c55643c62"
	I1019 13:15:19.954490  483335 cri.go:89] found id: "f6ec3fd90a9761e8bdef08c9b10e5ab281f98ed0dcb8b87dd9247f1a32992dbf"
	I1019 13:15:19.954493  483335 cri.go:89] found id: ""
	I1019 13:15:19.954546  483335 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:15:19.965219  483335 retry.go:31] will retry after 539.631609ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:15:19Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:15:20.505965  483335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:15:20.522248  483335 pause.go:52] kubelet running: false
	I1019 13:15:20.522318  483335 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:15:20.752192  483335 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:15:20.752286  483335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:15:20.827196  483335 cri.go:89] found id: "b705a2c9010d53b604a774ef561db9d3e61e0b62bf535ab25415f9195b64ff30"
	I1019 13:15:20.827217  483335 cri.go:89] found id: "cf9bd9d8e3217e21c9a1fc598471a6f3977f811a901df8104c4be0dd2f49a8fd"
	I1019 13:15:20.827222  483335 cri.go:89] found id: "65042321b8cee3cb9ba55a04d613c419d16e93b98467daa77830bad1dab0db52"
	I1019 13:15:20.827225  483335 cri.go:89] found id: "dd47350cd7bf7b6f9e2be9050bc252a57e4193e333974fa6bd6ac582509ea4b3"
	I1019 13:15:20.827228  483335 cri.go:89] found id: "d1b9315af72bf41414a7e6d2ce0d7b027d492620db2491d7a2387dc8a91676c4"
	I1019 13:15:20.827232  483335 cri.go:89] found id: "7b9e97a29ebf3e504604e73866544e7d0fd265d8ac39504373c3597d4796cbae"
	I1019 13:15:20.827235  483335 cri.go:89] found id: "32e954f04c57f4f9b9177fcb833b4861a5da3dff1bf1fdbdbd2c4d4bc0ebf7a3"
	I1019 13:15:20.827237  483335 cri.go:89] found id: "379b611212ba298f43db75b4d6fddb918b70f6a8d89ff799a0a9541dacd968cd"
	I1019 13:15:20.827240  483335 cri.go:89] found id: "78f63059c2fea7e2266edb01a7a8d4ae119845e91cc5ae1b0a044e0c22443f3e"
	I1019 13:15:20.827247  483335 cri.go:89] found id: "294540375ec117ef5624146472fd4938138577d72f86bb7e9d0ed89c55643c62"
	I1019 13:15:20.827251  483335 cri.go:89] found id: "f6ec3fd90a9761e8bdef08c9b10e5ab281f98ed0dcb8b87dd9247f1a32992dbf"
	I1019 13:15:20.827253  483335 cri.go:89] found id: ""
	I1019 13:15:20.827301  483335 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:15:20.843589  483335 out.go:203] 
	W1019 13:15:20.846592  483335 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:15:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:15:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 13:15:20.846614  483335 out.go:285] * 
	* 
	W1019 13:15:20.853884  483335 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 13:15:20.857597  483335 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-842494 --alsologtostderr -v=1 failed: exit status 80
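
The exit status 80 above traces back to `sudo runc list -f json` failing with "open /run/runc: no such file or directory": runc keeps per-container state under a root directory (defaulting to /run/runc for root), so the listing fails outright when that directory is absent on the node. A small sketch of probing for a usable state root before listing; the candidate paths are assumptions for illustration, not minikube's actual lookup logic:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// listContainers tries runc's default state root first, then an assumed
// alternative, and only shells out once a root actually exists on disk.
func listContainers() ([]byte, error) {
	candidates := []string{
		"/run/runc", // runc's default --root for the root user
		"/run/crun", // crun's default state root (assumed candidate)
	}
	for _, root := range candidates {
		if _, err := os.Stat(root); err != nil {
			continue // a missing root is exactly the failure in the log above
		}
		return exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
	}
	return nil, fmt.Errorf("no container state root found among %v", candidates)
}

func main() {
	out, err := listContainers()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	os.Stdout.Write(out)
}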
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-842494
helpers_test.go:243: (dbg) docker inspect old-k8s-version-842494:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd",
	        "Created": "2025-10-19T13:12:36.220963555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 479047,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:14:06.524817423Z",
	            "FinishedAt": "2025-10-19T13:14:03.251305326Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/hosts",
	        "LogPath": "/var/lib/docker/containers/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd-json.log",
	        "Name": "/old-k8s-version-842494",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-842494:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-842494",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd",
	                "LowerDir": "/var/lib/docker/overlay2/651a449c5b4e1673387a386a93fce51fb6365b65408215e08e645eaad452a977-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/651a449c5b4e1673387a386a93fce51fb6365b65408215e08e645eaad452a977/merged",
	                "UpperDir": "/var/lib/docker/overlay2/651a449c5b4e1673387a386a93fce51fb6365b65408215e08e645eaad452a977/diff",
	                "WorkDir": "/var/lib/docker/overlay2/651a449c5b4e1673387a386a93fce51fb6365b65408215e08e645eaad452a977/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-842494",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-842494/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-842494",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-842494",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-842494",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "24e54f266dfb8d57036aa8c98c086a1df2d17509d534943d687d9da0ce14f6be",
	            "SandboxKey": "/var/run/docker/netns/24e54f266dfb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-842494": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:48:d3:a3:23:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b37065579aa71db7f1dc53707ff2b821c589305580e1e4d9a2a0c035d310ed82",
	                    "EndpointID": "ccd0b478ead61d16d5d5173d1f13e4e7baa1e727ada8aa678e73dd1d101fda0e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-842494",
	                        "143af978a0b4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
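
Note that .State.Paused is false in the inspect output above, confirming the container never reached the paused state despite the pause attempt. A one-call check for that field, mirroring the docker CLI usage in the test helpers; the container name is the profile name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isPaused reads the .State.Paused field shown in the inspect dump above.
func isPaused(container string) (bool, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", "{{.State.Paused}}", container).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

func main() {
	paused, err := isPaused("old-k8s-version-842494")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("paused:", paused) // prints "paused: false" for the state captured above
}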
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-842494 -n old-k8s-version-842494
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-842494 -n old-k8s-version-842494: exit status 2 (459.182446ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-842494 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-842494 logs -n 25: (1.69366098s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-088393 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │ 19 Oct 25 13:10 UTC │
	│ start   │ -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-104724 │ jenkins │ v1.37.0 │ 19 Oct 25 13:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-104724 │ jenkins │ v1.37.0 │ 19 Oct 25 13:10 UTC │ 19 Oct 25 13:11 UTC │
	│ delete  │ -p kubernetes-upgrade-104724                                                                                                                                                                                                                  │ kubernetes-upgrade-104724 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ start   │ -p force-systemd-flag-606072 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ force-systemd-flag-606072 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ delete  │ -p force-systemd-flag-606072                                                                                                                                                                                                                  │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ start   │ -p cert-options-264135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:12 UTC │
	│ ssh     │ cert-options-264135 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ ssh     │ -p cert-options-264135 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ delete  │ -p cert-options-264135                                                                                                                                                                                                                        │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p cert-expiration-088393 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ delete  │ -p cert-expiration-088393                                                                                                                                                                                                                     │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-842494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │                     │
	│ stop    │ -p old-k8s-version-842494 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-842494 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │ 19 Oct 25 13:14 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-108149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ stop    │ -p no-preload-108149 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable dashboard -p no-preload-108149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ image   │ old-k8s-version-842494 image list --format=json                                                                                                                                                                                               │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ pause   │ -p old-k8s-version-842494 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:15:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:15:15.705609  482757 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:15:15.705795  482757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:15:15.705824  482757 out.go:374] Setting ErrFile to fd 2...
	I1019 13:15:15.705830  482757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:15:15.706153  482757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:15:15.706595  482757 out.go:368] Setting JSON to false
	I1019 13:15:15.707678  482757 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10666,"bootTime":1760869050,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:15:15.707835  482757 start.go:141] virtualization:  
	I1019 13:15:15.712883  482757 out.go:179] * [no-preload-108149] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:15:15.716110  482757 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:15:15.716189  482757 notify.go:220] Checking for updates...
	I1019 13:15:15.722041  482757 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:15:15.724890  482757 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:15:15.727903  482757 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:15:15.730898  482757 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:15:15.733913  482757 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:15:15.737362  482757 config.go:182] Loaded profile config "no-preload-108149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:15:15.737945  482757 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:15:15.764508  482757 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:15:15.764634  482757 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:15:15.828166  482757 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:15:15.818428042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:15:15.828282  482757 docker.go:318] overlay module found
	I1019 13:15:15.831429  482757 out.go:179] * Using the docker driver based on existing profile
	I1019 13:15:15.834329  482757 start.go:305] selected driver: docker
	I1019 13:15:15.834350  482757 start.go:925] validating driver "docker" against &{Name:no-preload-108149 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-108149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:15:15.834447  482757 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:15:15.835180  482757 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:15:15.905650  482757 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:15:15.894813131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:15:15.906086  482757 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:15:15.906125  482757 cni.go:84] Creating CNI manager for ""
	I1019 13:15:15.906194  482757 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:15:15.906231  482757 start.go:349] cluster config:
	{Name:no-preload-108149 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-108149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:15:15.911169  482757 out.go:179] * Starting "no-preload-108149" primary control-plane node in "no-preload-108149" cluster
	I1019 13:15:15.913931  482757 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:15:15.916875  482757 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:15:15.919701  482757 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:15:15.919778  482757 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:15:15.919846  482757 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/config.json ...
	I1019 13:15:15.920271  482757 cache.go:107] acquiring lock: {Name:mk5a8d8c97028719cbe957e1da9da945a08129b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920361  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1019 13:15:15.920370  482757 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.035µs
	I1019 13:15:15.920386  482757 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1019 13:15:15.920398  482757 cache.go:107] acquiring lock: {Name:mka319b8201ff42f7c4d5a909d9f20912ffd3c71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920429  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1019 13:15:15.920435  482757 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 38.622µs
	I1019 13:15:15.920441  482757 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1019 13:15:15.920450  482757 cache.go:107] acquiring lock: {Name:mk88bf9cd976728e53957a14cba132c54a305706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920475  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1019 13:15:15.920480  482757 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 31.491µs
	I1019 13:15:15.920489  482757 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1019 13:15:15.920498  482757 cache.go:107] acquiring lock: {Name:mkeff018c276f2dc7628871eceb8ffdfd4f5d5dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920524  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1019 13:15:15.920529  482757 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 31.918µs
	I1019 13:15:15.920535  482757 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1019 13:15:15.920545  482757 cache.go:107] acquiring lock: {Name:mka6ff496b257d0157aa179323c77a165d878290 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920570  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1019 13:15:15.920576  482757 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 32.477µs
	I1019 13:15:15.920582  482757 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1019 13:15:15.920590  482757 cache.go:107] acquiring lock: {Name:mk900cbbfae137b259a6d045a5e954905ebc4ab7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920616  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1019 13:15:15.920621  482757 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.549µs
	I1019 13:15:15.920636  482757 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1019 13:15:15.920645  482757 cache.go:107] acquiring lock: {Name:mk8151f8aaf53ecf9ac26af60dbb866094ee01c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920670  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1019 13:15:15.920675  482757 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 31.188µs
	I1019 13:15:15.920681  482757 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1019 13:15:15.920688  482757 cache.go:107] acquiring lock: {Name:mk588b1a76127636b20b5749ab1b86e294b230e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920713  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1019 13:15:15.920718  482757 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 30.532µs
	I1019 13:15:15.920723  482757 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1019 13:15:15.920729  482757 cache.go:87] Successfully saved all images to host disk.
	I1019 13:15:15.939578  482757 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:15:15.939601  482757 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:15:15.939619  482757 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:15:15.939650  482757 start.go:360] acquireMachinesLock for no-preload-108149: {Name:mk1e7d61a5a88a341b3d8e7634b6c23c2df5dac5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.939705  482757 start.go:364] duration metric: took 37.334µs to acquireMachinesLock for "no-preload-108149"
	I1019 13:15:15.939730  482757 start.go:96] Skipping create...Using existing machine configuration
	I1019 13:15:15.939736  482757 fix.go:54] fixHost starting: 
	I1019 13:15:15.939982  482757 cli_runner.go:164] Run: docker container inspect no-preload-108149 --format={{.State.Status}}
	I1019 13:15:15.957187  482757 fix.go:112] recreateIfNeeded on no-preload-108149: state=Stopped err=<nil>
	W1019 13:15:15.957218  482757 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 13:15:15.962343  482757 out.go:252] * Restarting existing docker container for "no-preload-108149" ...
	I1019 13:15:15.962440  482757 cli_runner.go:164] Run: docker start no-preload-108149
	I1019 13:15:16.269415  482757 cli_runner.go:164] Run: docker container inspect no-preload-108149 --format={{.State.Status}}
	I1019 13:15:16.289992  482757 kic.go:430] container "no-preload-108149" state is running.
	I1019 13:15:16.290440  482757 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-108149
	I1019 13:15:16.313699  482757 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/config.json ...
	I1019 13:15:16.313939  482757 machine.go:93] provisionDockerMachine start ...
	I1019 13:15:16.314002  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:16.334551  482757 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:16.334880  482757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1019 13:15:16.334889  482757 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:15:16.335457  482757 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45616->127.0.0.1:33433: read: connection reset by peer
	I1019 13:15:19.502178  482757 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-108149
	
	I1019 13:15:19.502215  482757 ubuntu.go:182] provisioning hostname "no-preload-108149"
	I1019 13:15:19.502285  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:19.519798  482757 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:19.520130  482757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1019 13:15:19.520149  482757 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-108149 && echo "no-preload-108149" | sudo tee /etc/hostname
	I1019 13:15:19.695580  482757 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-108149
	
	I1019 13:15:19.695666  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:19.729943  482757 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:19.730274  482757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1019 13:15:19.730298  482757 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-108149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-108149/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-108149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:15:19.889949  482757 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 13:15:19.889989  482757 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:15:19.890042  482757 ubuntu.go:190] setting up certificates
	I1019 13:15:19.890053  482757 provision.go:84] configureAuth start
	I1019 13:15:19.890137  482757 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-108149
	I1019 13:15:19.915806  482757 provision.go:143] copyHostCerts
	I1019 13:15:19.915874  482757 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:15:19.915903  482757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:15:19.915980  482757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:15:19.916091  482757 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:15:19.916105  482757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:15:19.916135  482757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:15:19.916201  482757 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:15:19.916211  482757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:15:19.916236  482757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:15:19.916291  482757 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.no-preload-108149 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-108149]
	I1019 13:15:20.113819  482757 provision.go:177] copyRemoteCerts
	I1019 13:15:20.113917  482757 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:15:20.113978  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:20.132240  482757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:15:20.237661  482757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:15:20.261364  482757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 13:15:20.280018  482757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 13:15:20.297601  482757 provision.go:87] duration metric: took 407.529478ms to configureAuth
	I1019 13:15:20.297626  482757 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:15:20.297889  482757 config.go:182] Loaded profile config "no-preload-108149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:15:20.298003  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:20.315415  482757 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:20.315747  482757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1019 13:15:20.315766  482757 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:15:20.681357  482757 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:15:20.681394  482757 machine.go:96] duration metric: took 4.36744593s to provisionDockerMachine
	I1019 13:15:20.681405  482757 start.go:293] postStartSetup for "no-preload-108149" (driver="docker")
	I1019 13:15:20.681416  482757 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:15:20.681496  482757 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:15:20.681545  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	
	
	==> CRI-O <==
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.111761956Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.118398796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.119090957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.134656236Z" level=info msg="Created container 294540375ec117ef5624146472fd4938138577d72f86bb7e9d0ed89c55643c62: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd/dashboard-metrics-scraper" id=5ea97f28-1261-447c-a173-39c8fbc0fd6b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.135704332Z" level=info msg="Starting container: 294540375ec117ef5624146472fd4938138577d72f86bb7e9d0ed89c55643c62" id=678f1915-8a5b-4421-b438-6994cc5082cc name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.138266089Z" level=info msg="Started container" PID=1635 containerID=294540375ec117ef5624146472fd4938138577d72f86bb7e9d0ed89c55643c62 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd/dashboard-metrics-scraper id=678f1915-8a5b-4421-b438-6994cc5082cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=a5dd2db60952834521d59f2bdb0bbf53ee403f34efc5727664fd18bb040e1345
	Oct 19 13:14:59 old-k8s-version-842494 conmon[1633]: conmon 294540375ec117ef5624 <ninfo>: container 1635 exited with status 1
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.428836682Z" level=info msg="Removing container: 6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d" id=747a5833-c2f2-4e3a-bffb-16a565f8ce30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.440458491Z" level=info msg="Error loading conmon cgroup of container 6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d: cgroup deleted" id=747a5833-c2f2-4e3a-bffb-16a565f8ce30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.445426787Z" level=info msg="Removed container 6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd/dashboard-metrics-scraper" id=747a5833-c2f2-4e3a-bffb-16a565f8ce30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.451678859Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.455799915Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.455835879Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.455858033Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.459085424Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.459124358Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.45914867Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.462194372Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.462230344Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.46225313Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.465502109Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.46553621Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.465562844Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.468873633Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.468907791Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	294540375ec11       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   a5dd2db609528       dashboard-metrics-scraper-5f989dc9cf-f58zd       kubernetes-dashboard
	b705a2c9010d5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   edd1ecf7569c7       storage-provisioner                              kube-system
	f6ec3fd90a976       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago       Running             kubernetes-dashboard        0                   0d683f26491a9       kubernetes-dashboard-8694d4445c-7m5tv            kubernetes-dashboard
	cf9bd9d8e3217       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           56 seconds ago       Running             coredns                     1                   5ef79b67b0965       coredns-5dd5756b68-5mdz7                         kube-system
	f27de09649c7d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   457988159c83c       busybox                                          default
	65042321b8cee       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   fc3d94733f27d       kube-proxy-v7wq7                                 kube-system
	dd47350cd7bf7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   edd1ecf7569c7       storage-provisioner                              kube-system
	d1b9315af72bf       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   5f02c467b1212       kindnet-7lwtw                                    kube-system
	7b9e97a29ebf3       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   b3bc62cf3225b       kube-controller-manager-old-k8s-version-842494   kube-system
	32e954f04c57f       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   825babba820ef       etcd-old-k8s-version-842494                      kube-system
	379b611212ba2       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   9c806d7c89897       kube-scheduler-old-k8s-version-842494            kube-system
	78f63059c2fea       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   2591a3c5640bb       kube-apiserver-old-k8s-version-842494            kube-system
	
	
	==> coredns [cf9bd9d8e3217e21c9a1fc598471a6f3977f811a901df8104c4be0dd2f49a8fd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57361 - 42043 "HINFO IN 8580207794113153234.9172770925368522785. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018591034s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-842494
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-842494
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=old-k8s-version-842494
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_13_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:13:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-842494
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:15:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:15:15 +0000   Sun, 19 Oct 2025 13:12:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:15:15 +0000   Sun, 19 Oct 2025 13:12:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:15:15 +0000   Sun, 19 Oct 2025 13:12:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:15:15 +0000   Sun, 19 Oct 2025 13:13:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-842494
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                ff91876e-8bed-4e46-9175-4f587101f24f
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-5dd5756b68-5mdz7                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m4s
	  kube-system                 etcd-old-k8s-version-842494                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m17s
	  kube-system                 kindnet-7lwtw                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m4s
	  kube-system                 kube-apiserver-old-k8s-version-842494             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-controller-manager-old-k8s-version-842494    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-v7wq7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-scheduler-old-k8s-version-842494             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-f58zd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-7m5tv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m3s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m26s (x8 over 2m26s)  kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m26s (x8 over 2m26s)  kubelet          Node old-k8s-version-842494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m26s (x8 over 2m26s)  kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m17s                  kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m17s                  kubelet          Node old-k8s-version-842494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m17s                  kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m5s                   node-controller  Node old-k8s-version-842494 event: Registered Node old-k8s-version-842494 in Controller
	  Normal  NodeReady                109s                   kubelet          Node old-k8s-version-842494 status is now: NodeReady
	  Normal  Starting                 69s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node old-k8s-version-842494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                    node-controller  Node old-k8s-version-842494 event: Registered Node old-k8s-version-842494 in Controller
	
	
	==> dmesg <==
	[Oct19 12:50] overlayfs: idmapped layers are currently not supported
	[Oct19 12:51] overlayfs: idmapped layers are currently not supported
	[Oct19 12:52] overlayfs: idmapped layers are currently not supported
	[Oct19 12:53] overlayfs: idmapped layers are currently not supported
	[Oct19 12:54] overlayfs: idmapped layers are currently not supported
	[Oct19 12:56] overlayfs: idmapped layers are currently not supported
	[ +16.315179] overlayfs: idmapped layers are currently not supported
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [32e954f04c57f4f9b9177fcb833b4861a5da3dff1bf1fdbdbd2c4d4bc0ebf7a3] <==
	{"level":"info","ts":"2025-10-19T13:14:16.025486Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-19T13:14:16.025569Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T13:14:16.025595Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T13:14:16.027705Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T13:14:16.027746Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T13:14:16.027754Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T13:14:16.081564Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-19T13:14:16.10383Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-19T13:14:16.094943Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T13:14:16.105169Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-19T13:14:16.122038Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T13:14:16.971832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-19T13:14:16.97196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-19T13:14:16.97201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-19T13:14:16.972051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-19T13:14:16.972084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-19T13:14:16.972122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-19T13:14:16.972163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-19T13:14:16.974486Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-842494 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T13:14:16.974704Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T13:14:16.976522Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-19T13:14:16.98175Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T13:14:16.982776Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-19T13:14:17.005735Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T13:14:17.005843Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:15:22 up  2:57,  0 user,  load average: 2.70, 2.86, 2.59
	Linux old-k8s-version-842494 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d1b9315af72bf41414a7e6d2ce0d7b027d492620db2491d7a2387dc8a91676c4] <==
	I1019 13:14:26.222268       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:14:26.225842       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 13:14:26.226121       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:14:26.226135       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:14:26.226149       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:14:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:14:26.449607       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:14:26.449625       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:14:26.449633       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:14:26.450545       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 13:14:56.451730       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 13:14:56.451894       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 13:14:56.451982       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 13:14:56.452066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1019 13:14:57.950398       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:14:57.950499       1 metrics.go:72] Registering metrics
	I1019 13:14:57.950577       1 controller.go:711] "Syncing nftables rules"
	I1019 13:15:06.449497       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 13:15:06.450899       1 main.go:301] handling current node
	I1019 13:15:16.449238       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 13:15:16.449374       1 main.go:301] handling current node
	
	
	==> kube-apiserver [78f63059c2fea7e2266edb01a7a8d4ae119845e91cc5ae1b0a044e0c22443f3e] <==
	I1019 13:14:24.374645       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1019 13:14:24.375655       1 aggregator.go:166] initial CRD sync complete...
	I1019 13:14:24.375997       1 autoregister_controller.go:141] Starting autoregister controller
	I1019 13:14:24.376051       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 13:14:24.449147       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:14:24.473067       1 trace.go:236] Trace[1639828164]: "DeltaFIFO Pop Process" ID:v1.admissionregistration.k8s.io,Depth:19,Reason:slow event handlers blocking the queue (19-Oct-2025 13:14:24.364) (total time: 108ms):
	Trace[1639828164]: [108.211658ms] [108.211658ms] END
	I1019 13:14:24.474117       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1019 13:14:24.474143       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1019 13:14:24.485869       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 13:14:24.546324       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1019 13:14:24.578635       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1019 13:14:24.585585       1 cache.go:39] Caches are synced for autoregister controller
	E1019 13:14:24.669294       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 13:14:24.878933       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:14:27.830740       1 controller.go:624] quota admission added evaluator for: namespaces
	I1019 13:14:27.885783       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1019 13:14:27.915882       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:14:27.942888       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:14:27.961031       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1019 13:14:28.030324       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.248.70"}
	I1019 13:14:28.051699       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.98.161"}
	I1019 13:14:37.572289       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:14:37.581962       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1019 13:14:37.614552       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7b9e97a29ebf3e504604e73866544e7d0fd265d8ac39504373c3597d4796cbae] <==
	I1019 13:14:37.647696       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-f58zd"
	I1019 13:14:37.648150       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-7m5tv"
	I1019 13:14:37.667096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.279098ms"
	I1019 13:14:37.672046       1 shared_informer.go:318] Caches are synced for resource quota
	I1019 13:14:37.672226       1 shared_informer.go:318] Caches are synced for resource quota
	I1019 13:14:37.694832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="90.676912ms"
	I1019 13:14:37.701850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.66996ms"
	I1019 13:14:37.702030       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.884µs"
	I1019 13:14:37.702155       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.998µs"
	I1019 13:14:37.712376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.476664ms"
	I1019 13:14:37.712567       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.023µs"
	I1019 13:14:37.722447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.434µs"
	I1019 13:14:37.738177       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="56.206µs"
	I1019 13:14:38.029989       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 13:14:38.052562       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 13:14:38.052608       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1019 13:14:42.381273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.561µs"
	I1019 13:14:43.405658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.475µs"
	I1019 13:14:44.407138       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.392µs"
	I1019 13:14:48.434575       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.291343ms"
	I1019 13:14:48.434703       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.097µs"
	I1019 13:14:59.448120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.173µs"
	I1019 13:15:05.632513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.584257ms"
	I1019 13:15:05.633034       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.519µs"
	I1019 13:15:07.991909       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.636µs"
	
	
	==> kube-proxy [65042321b8cee3cb9ba55a04d613c419d16e93b98467daa77830bad1dab0db52] <==
	I1019 13:14:26.694298       1 server_others.go:69] "Using iptables proxy"
	I1019 13:14:26.796308       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1019 13:14:26.950998       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:14:26.958407       1 server_others.go:152] "Using iptables Proxier"
	I1019 13:14:26.958444       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1019 13:14:26.958452       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1019 13:14:26.958480       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1019 13:14:26.958690       1 server.go:846] "Version info" version="v1.28.0"
	I1019 13:14:26.958700       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:14:26.973271       1 config.go:188] "Starting service config controller"
	I1019 13:14:26.973304       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1019 13:14:26.973336       1 config.go:97] "Starting endpoint slice config controller"
	I1019 13:14:26.973340       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1019 13:14:26.976345       1 config.go:315] "Starting node config controller"
	I1019 13:14:26.976385       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1019 13:14:27.073915       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1019 13:14:27.073971       1 shared_informer.go:318] Caches are synced for service config
	I1019 13:14:27.077304       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [379b611212ba298f43db75b4d6fddb918b70f6a8d89ff799a0a9541dacd968cd] <==
	W1019 13:14:24.047361       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 13:14:24.047388       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 13:14:24.047398       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 13:14:24.047404       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 13:14:24.326207       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1019 13:14:24.333969       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:14:24.338566       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1019 13:14:24.341917       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:14:24.348425       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1019 13:14:24.341956       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1019 13:14:24.374224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1019 13:14:24.374273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1019 13:14:24.374364       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1019 13:14:24.374380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1019 13:14:24.374473       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1019 13:14:24.374484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1019 13:14:24.374718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1019 13:14:24.374742       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1019 13:14:24.378292       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1019 13:14:24.378322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1019 13:14:24.378460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1019 13:14:24.378477       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1019 13:14:24.378538       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1019 13:14:24.378550       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1019 13:14:24.454119       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 19 13:14:37 old-k8s-version-842494 kubelet[777]: I1019 13:14:37.785875     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/230d520c-cc44-4ec0-b4e5-9535bb6640cd-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-f58zd\" (UID: \"230d520c-cc44-4ec0-b4e5-9535bb6640cd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd"
	Oct 19 13:14:37 old-k8s-version-842494 kubelet[777]: I1019 13:14:37.786218     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9753fd7f-7e7b-4446-adf9-ab41cecf44d6-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-7m5tv\" (UID: \"9753fd7f-7e7b-4446-adf9-ab41cecf44d6\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7m5tv"
	Oct 19 13:14:37 old-k8s-version-842494 kubelet[777]: I1019 13:14:37.786270     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k972x\" (UniqueName: \"kubernetes.io/projected/9753fd7f-7e7b-4446-adf9-ab41cecf44d6-kube-api-access-k972x\") pod \"kubernetes-dashboard-8694d4445c-7m5tv\" (UID: \"9753fd7f-7e7b-4446-adf9-ab41cecf44d6\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7m5tv"
	Oct 19 13:14:37 old-k8s-version-842494 kubelet[777]: I1019 13:14:37.786305     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxpjg\" (UniqueName: \"kubernetes.io/projected/230d520c-cc44-4ec0-b4e5-9535bb6640cd-kube-api-access-pxpjg\") pod \"dashboard-metrics-scraper-5f989dc9cf-f58zd\" (UID: \"230d520c-cc44-4ec0-b4e5-9535bb6640cd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd"
	Oct 19 13:14:38 old-k8s-version-842494 kubelet[777]: W1019 13:14:38.018272     777 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/crio-0d683f26491a92ecc6b95359c7dcf722cb9037b06d5c4ea8999a3b6eecc1e104 WatchSource:0}: Error finding container 0d683f26491a92ecc6b95359c7dcf722cb9037b06d5c4ea8999a3b6eecc1e104: Status 404 returned error can't find the container with id 0d683f26491a92ecc6b95359c7dcf722cb9037b06d5c4ea8999a3b6eecc1e104
	Oct 19 13:14:42 old-k8s-version-842494 kubelet[777]: I1019 13:14:42.367695     777 scope.go:117] "RemoveContainer" containerID="28a068e846942cfda600e70a04682a52b15be800ada906289239b96cfbf9f168"
	Oct 19 13:14:43 old-k8s-version-842494 kubelet[777]: I1019 13:14:43.376306     777 scope.go:117] "RemoveContainer" containerID="6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d"
	Oct 19 13:14:43 old-k8s-version-842494 kubelet[777]: I1019 13:14:43.377374     777 scope.go:117] "RemoveContainer" containerID="28a068e846942cfda600e70a04682a52b15be800ada906289239b96cfbf9f168"
	Oct 19 13:14:43 old-k8s-version-842494 kubelet[777]: E1019 13:14:43.377427     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f58zd_kubernetes-dashboard(230d520c-cc44-4ec0-b4e5-9535bb6640cd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd" podUID="230d520c-cc44-4ec0-b4e5-9535bb6640cd"
	Oct 19 13:14:44 old-k8s-version-842494 kubelet[777]: I1019 13:14:44.384803     777 scope.go:117] "RemoveContainer" containerID="6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d"
	Oct 19 13:14:44 old-k8s-version-842494 kubelet[777]: E1019 13:14:44.385078     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f58zd_kubernetes-dashboard(230d520c-cc44-4ec0-b4e5-9535bb6640cd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd" podUID="230d520c-cc44-4ec0-b4e5-9535bb6640cd"
	Oct 19 13:14:47 old-k8s-version-842494 kubelet[777]: I1019 13:14:47.978080     777 scope.go:117] "RemoveContainer" containerID="6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d"
	Oct 19 13:14:47 old-k8s-version-842494 kubelet[777]: E1019 13:14:47.978384     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f58zd_kubernetes-dashboard(230d520c-cc44-4ec0-b4e5-9535bb6640cd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd" podUID="230d520c-cc44-4ec0-b4e5-9535bb6640cd"
	Oct 19 13:14:56 old-k8s-version-842494 kubelet[777]: I1019 13:14:56.416210     777 scope.go:117] "RemoveContainer" containerID="dd47350cd7bf7b6f9e2be9050bc252a57e4193e333974fa6bd6ac582509ea4b3"
	Oct 19 13:14:56 old-k8s-version-842494 kubelet[777]: I1019 13:14:56.446334     777 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7m5tv" podStartSLOduration=9.657857245 podCreationTimestamp="2025-10-19 13:14:37 +0000 UTC" firstStartedPulling="2025-10-19 13:14:38.022499167 +0000 UTC m=+24.270004137" lastFinishedPulling="2025-10-19 13:14:47.81014868 +0000 UTC m=+34.057653658" observedRunningTime="2025-10-19 13:14:48.413344657 +0000 UTC m=+34.660849635" watchObservedRunningTime="2025-10-19 13:14:56.445506766 +0000 UTC m=+42.693011744"
	Oct 19 13:14:59 old-k8s-version-842494 kubelet[777]: I1019 13:14:59.108488     777 scope.go:117] "RemoveContainer" containerID="6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d"
	Oct 19 13:14:59 old-k8s-version-842494 kubelet[777]: I1019 13:14:59.426849     777 scope.go:117] "RemoveContainer" containerID="6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d"
	Oct 19 13:14:59 old-k8s-version-842494 kubelet[777]: I1019 13:14:59.427120     777 scope.go:117] "RemoveContainer" containerID="294540375ec117ef5624146472fd4938138577d72f86bb7e9d0ed89c55643c62"
	Oct 19 13:14:59 old-k8s-version-842494 kubelet[777]: E1019 13:14:59.427386     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f58zd_kubernetes-dashboard(230d520c-cc44-4ec0-b4e5-9535bb6640cd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd" podUID="230d520c-cc44-4ec0-b4e5-9535bb6640cd"
	Oct 19 13:15:07 old-k8s-version-842494 kubelet[777]: I1019 13:15:07.976904     777 scope.go:117] "RemoveContainer" containerID="294540375ec117ef5624146472fd4938138577d72f86bb7e9d0ed89c55643c62"
	Oct 19 13:15:07 old-k8s-version-842494 kubelet[777]: E1019 13:15:07.977228     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f58zd_kubernetes-dashboard(230d520c-cc44-4ec0-b4e5-9535bb6640cd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd" podUID="230d520c-cc44-4ec0-b4e5-9535bb6640cd"
	Oct 19 13:15:19 old-k8s-version-842494 kubelet[777]: I1019 13:15:19.302657     777 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 19 13:15:19 old-k8s-version-842494 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 13:15:19 old-k8s-version-842494 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 13:15:19 old-k8s-version-842494 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f6ec3fd90a9761e8bdef08c9b10e5ab281f98ed0dcb8b87dd9247f1a32992dbf] <==
	2025/10/19 13:14:47 Using namespace: kubernetes-dashboard
	2025/10/19 13:14:47 Using in-cluster config to connect to apiserver
	2025/10/19 13:14:47 Using secret token for csrf signing
	2025/10/19 13:14:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 13:14:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 13:14:47 Successful initial request to the apiserver, version: v1.28.0
	2025/10/19 13:14:47 Generating JWE encryption key
	2025/10/19 13:14:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 13:14:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 13:14:48 Initializing JWE encryption key from synchronized object
	2025/10/19 13:14:48 Creating in-cluster Sidecar client
	2025/10/19 13:14:48 Serving insecurely on HTTP port: 9090
	2025/10/19 13:14:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:15:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:14:47 Starting overwatch
	
	
	==> storage-provisioner [b705a2c9010d53b604a774ef561db9d3e61e0b62bf535ab25415f9195b64ff30] <==
	I1019 13:14:56.461270       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 13:14:56.476700       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 13:14:56.476746       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 13:15:13.874432       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 13:15:13.874719       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-842494_94251d4c-2e21-4f49-9370-acba525d5bab!
	I1019 13:15:13.877103       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba2428d6-3741-40ad-80da-985be3fb4b28", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-842494_94251d4c-2e21-4f49-9370-acba525d5bab became leader
	I1019 13:15:13.974878       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-842494_94251d4c-2e21-4f49-9370-acba525d5bab!
	
	
	==> storage-provisioner [dd47350cd7bf7b6f9e2be9050bc252a57e4193e333974fa6bd6ac582509ea4b3] <==
	I1019 13:14:26.213425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 13:14:56.224154       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-842494 -n old-k8s-version-842494
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-842494 -n old-k8s-version-842494: exit status 2 (500.975985ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-842494 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-842494
helpers_test.go:243: (dbg) docker inspect old-k8s-version-842494:

-- stdout --
	[
	    {
	        "Id": "143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd",
	        "Created": "2025-10-19T13:12:36.220963555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 479047,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:14:06.524817423Z",
	            "FinishedAt": "2025-10-19T13:14:03.251305326Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/hosts",
	        "LogPath": "/var/lib/docker/containers/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd-json.log",
	        "Name": "/old-k8s-version-842494",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-842494:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-842494",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd",
	                "LowerDir": "/var/lib/docker/overlay2/651a449c5b4e1673387a386a93fce51fb6365b65408215e08e645eaad452a977-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/651a449c5b4e1673387a386a93fce51fb6365b65408215e08e645eaad452a977/merged",
	                "UpperDir": "/var/lib/docker/overlay2/651a449c5b4e1673387a386a93fce51fb6365b65408215e08e645eaad452a977/diff",
	                "WorkDir": "/var/lib/docker/overlay2/651a449c5b4e1673387a386a93fce51fb6365b65408215e08e645eaad452a977/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-842494",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-842494/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-842494",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-842494",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-842494",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "24e54f266dfb8d57036aa8c98c086a1df2d17509d534943d687d9da0ce14f6be",
	            "SandboxKey": "/var/run/docker/netns/24e54f266dfb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-842494": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:48:d3:a3:23:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b37065579aa71db7f1dc53707ff2b821c589305580e1e4d9a2a0c035d310ed82",
	                    "EndpointID": "ccd0b478ead61d16d5d5173d1f13e4e7baa1e727ada8aa678e73dd1d101fda0e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-842494",
	                        "143af978a0b4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-842494 -n old-k8s-version-842494
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-842494 -n old-k8s-version-842494: exit status 2 (484.960057ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-842494 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-842494 logs -n 25: (2.09592323s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-088393 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:09 UTC │ 19 Oct 25 13:10 UTC │
	│ start   │ -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-104724 │ jenkins │ v1.37.0 │ 19 Oct 25 13:10 UTC │                     │
	│ start   │ -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-104724 │ jenkins │ v1.37.0 │ 19 Oct 25 13:10 UTC │ 19 Oct 25 13:11 UTC │
	│ delete  │ -p kubernetes-upgrade-104724                                                                                                                                                                                                                  │ kubernetes-upgrade-104724 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ start   │ -p force-systemd-flag-606072 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ ssh     │ force-systemd-flag-606072 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ delete  │ -p force-systemd-flag-606072                                                                                                                                                                                                                  │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ start   │ -p cert-options-264135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:12 UTC │
	│ ssh     │ cert-options-264135 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ ssh     │ -p cert-options-264135 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ delete  │ -p cert-options-264135                                                                                                                                                                                                                        │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p cert-expiration-088393 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ delete  │ -p cert-expiration-088393                                                                                                                                                                                                                     │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-842494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │                     │
	│ stop    │ -p old-k8s-version-842494 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-842494 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │ 19 Oct 25 13:14 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-108149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ stop    │ -p no-preload-108149 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable dashboard -p no-preload-108149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ image   │ old-k8s-version-842494 image list --format=json                                                                                                                                                                                               │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ pause   │ -p old-k8s-version-842494 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:15:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:15:15.705609  482757 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:15:15.705795  482757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:15:15.705824  482757 out.go:374] Setting ErrFile to fd 2...
	I1019 13:15:15.705830  482757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:15:15.706153  482757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:15:15.706595  482757 out.go:368] Setting JSON to false
	I1019 13:15:15.707678  482757 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10666,"bootTime":1760869050,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:15:15.707835  482757 start.go:141] virtualization:  
	I1019 13:15:15.712883  482757 out.go:179] * [no-preload-108149] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:15:15.716110  482757 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:15:15.716189  482757 notify.go:220] Checking for updates...
	I1019 13:15:15.722041  482757 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:15:15.724890  482757 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:15:15.727903  482757 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:15:15.730898  482757 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:15:15.733913  482757 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:15:15.737362  482757 config.go:182] Loaded profile config "no-preload-108149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:15:15.737945  482757 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:15:15.764508  482757 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:15:15.764634  482757 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:15:15.828166  482757 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:15:15.818428042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:15:15.828282  482757 docker.go:318] overlay module found
	I1019 13:15:15.831429  482757 out.go:179] * Using the docker driver based on existing profile
	I1019 13:15:15.834329  482757 start.go:305] selected driver: docker
	I1019 13:15:15.834350  482757 start.go:925] validating driver "docker" against &{Name:no-preload-108149 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-108149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:15:15.834447  482757 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:15:15.835180  482757 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:15:15.905650  482757 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:15:15.894813131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:15:15.906086  482757 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:15:15.906125  482757 cni.go:84] Creating CNI manager for ""
	I1019 13:15:15.906194  482757 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:15:15.906231  482757 start.go:349] cluster config:
	{Name:no-preload-108149 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-108149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:15:15.911169  482757 out.go:179] * Starting "no-preload-108149" primary control-plane node in "no-preload-108149" cluster
	I1019 13:15:15.913931  482757 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:15:15.916875  482757 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:15:15.919701  482757 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:15:15.919778  482757 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:15:15.919846  482757 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/config.json ...
	I1019 13:15:15.920271  482757 cache.go:107] acquiring lock: {Name:mk5a8d8c97028719cbe957e1da9da945a08129b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920361  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1019 13:15:15.920370  482757 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.035µs
	I1019 13:15:15.920386  482757 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1019 13:15:15.920398  482757 cache.go:107] acquiring lock: {Name:mka319b8201ff42f7c4d5a909d9f20912ffd3c71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920429  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1019 13:15:15.920435  482757 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 38.622µs
	I1019 13:15:15.920441  482757 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1019 13:15:15.920450  482757 cache.go:107] acquiring lock: {Name:mk88bf9cd976728e53957a14cba132c54a305706 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920475  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1019 13:15:15.920480  482757 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 31.491µs
	I1019 13:15:15.920489  482757 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1019 13:15:15.920498  482757 cache.go:107] acquiring lock: {Name:mkeff018c276f2dc7628871eceb8ffdfd4f5d5dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920524  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1019 13:15:15.920529  482757 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 31.918µs
	I1019 13:15:15.920535  482757 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1019 13:15:15.920545  482757 cache.go:107] acquiring lock: {Name:mka6ff496b257d0157aa179323c77a165d878290 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920570  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1019 13:15:15.920576  482757 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 32.477µs
	I1019 13:15:15.920582  482757 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1019 13:15:15.920590  482757 cache.go:107] acquiring lock: {Name:mk900cbbfae137b259a6d045a5e954905ebc4ab7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920616  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1019 13:15:15.920621  482757 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.549µs
	I1019 13:15:15.920636  482757 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1019 13:15:15.920645  482757 cache.go:107] acquiring lock: {Name:mk8151f8aaf53ecf9ac26af60dbb866094ee01c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920670  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1019 13:15:15.920675  482757 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 31.188µs
	I1019 13:15:15.920681  482757 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1019 13:15:15.920688  482757 cache.go:107] acquiring lock: {Name:mk588b1a76127636b20b5749ab1b86e294b230e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.920713  482757 cache.go:115] /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1019 13:15:15.920718  482757 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 30.532µs
	I1019 13:15:15.920723  482757 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1019 13:15:15.920729  482757 cache.go:87] Successfully saved all images to host disk.
	I1019 13:15:15.939578  482757 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:15:15.939601  482757 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:15:15.939619  482757 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:15:15.939650  482757 start.go:360] acquireMachinesLock for no-preload-108149: {Name:mk1e7d61a5a88a341b3d8e7634b6c23c2df5dac5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:15.939705  482757 start.go:364] duration metric: took 37.334µs to acquireMachinesLock for "no-preload-108149"
	I1019 13:15:15.939730  482757 start.go:96] Skipping create...Using existing machine configuration
	I1019 13:15:15.939736  482757 fix.go:54] fixHost starting: 
	I1019 13:15:15.939982  482757 cli_runner.go:164] Run: docker container inspect no-preload-108149 --format={{.State.Status}}
	I1019 13:15:15.957187  482757 fix.go:112] recreateIfNeeded on no-preload-108149: state=Stopped err=<nil>
	W1019 13:15:15.957218  482757 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 13:15:15.962343  482757 out.go:252] * Restarting existing docker container for "no-preload-108149" ...
	I1019 13:15:15.962440  482757 cli_runner.go:164] Run: docker start no-preload-108149
	I1019 13:15:16.269415  482757 cli_runner.go:164] Run: docker container inspect no-preload-108149 --format={{.State.Status}}
	I1019 13:15:16.289992  482757 kic.go:430] container "no-preload-108149" state is running.
	I1019 13:15:16.290440  482757 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-108149
	I1019 13:15:16.313699  482757 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/config.json ...
	I1019 13:15:16.313939  482757 machine.go:93] provisionDockerMachine start ...
	I1019 13:15:16.314002  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:16.334551  482757 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:16.334880  482757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1019 13:15:16.334889  482757 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:15:16.335457  482757 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45616->127.0.0.1:33433: read: connection reset by peer
	I1019 13:15:19.502178  482757 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-108149
	
	I1019 13:15:19.502215  482757 ubuntu.go:182] provisioning hostname "no-preload-108149"
	I1019 13:15:19.502285  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:19.519798  482757 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:19.520130  482757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1019 13:15:19.520149  482757 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-108149 && echo "no-preload-108149" | sudo tee /etc/hostname
	I1019 13:15:19.695580  482757 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-108149
	
	I1019 13:15:19.695666  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:19.729943  482757 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:19.730274  482757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1019 13:15:19.730298  482757 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-108149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-108149/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-108149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:15:19.889949  482757 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 13:15:19.889989  482757 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:15:19.890042  482757 ubuntu.go:190] setting up certificates
	I1019 13:15:19.890053  482757 provision.go:84] configureAuth start
	I1019 13:15:19.890137  482757 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-108149
	I1019 13:15:19.915806  482757 provision.go:143] copyHostCerts
	I1019 13:15:19.915874  482757 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:15:19.915903  482757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:15:19.915980  482757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:15:19.916091  482757 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:15:19.916105  482757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:15:19.916135  482757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:15:19.916201  482757 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:15:19.916211  482757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:15:19.916236  482757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:15:19.916291  482757 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.no-preload-108149 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-108149]
	I1019 13:15:20.113819  482757 provision.go:177] copyRemoteCerts
	I1019 13:15:20.113917  482757 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:15:20.113978  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:20.132240  482757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:15:20.237661  482757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:15:20.261364  482757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 13:15:20.280018  482757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 13:15:20.297601  482757 provision.go:87] duration metric: took 407.529478ms to configureAuth
	I1019 13:15:20.297626  482757 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:15:20.297889  482757 config.go:182] Loaded profile config "no-preload-108149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:15:20.298003  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:20.315415  482757 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:20.315747  482757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1019 13:15:20.315766  482757 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:15:20.681357  482757 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:15:20.681394  482757 machine.go:96] duration metric: took 4.36744593s to provisionDockerMachine
	I1019 13:15:20.681405  482757 start.go:293] postStartSetup for "no-preload-108149" (driver="docker")
	I1019 13:15:20.681416  482757 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:15:20.681496  482757 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:15:20.681545  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:20.717961  482757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:15:20.826086  482757 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:15:20.831086  482757 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:15:20.831153  482757 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:15:20.831178  482757 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:15:20.831266  482757 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:15:20.831398  482757 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:15:20.831550  482757 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:15:20.840732  482757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:15:20.879080  482757 start.go:296] duration metric: took 197.659825ms for postStartSetup
	I1019 13:15:20.879159  482757 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:15:20.879218  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:20.909879  482757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:15:21.015283  482757 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:15:21.021024  482757 fix.go:56] duration metric: took 5.081281831s for fixHost
	I1019 13:15:21.021047  482757 start.go:83] releasing machines lock for "no-preload-108149", held for 5.081328511s
	I1019 13:15:21.021123  482757 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-108149
	I1019 13:15:21.047617  482757 ssh_runner.go:195] Run: cat /version.json
	I1019 13:15:21.047666  482757 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:15:21.047720  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:21.047738  482757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:15:21.074867  482757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:15:21.106564  482757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:15:21.198909  482757 ssh_runner.go:195] Run: systemctl --version
	I1019 13:15:21.380526  482757 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:15:21.446874  482757 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:15:21.453551  482757 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:15:21.453652  482757 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:15:21.463101  482757 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 13:15:21.463124  482757 start.go:495] detecting cgroup driver to use...
	I1019 13:15:21.463157  482757 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:15:21.463204  482757 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:15:21.480227  482757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:15:21.496855  482757 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:15:21.496915  482757 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:15:21.524999  482757 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:15:21.540984  482757 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:15:21.693878  482757 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:15:21.840397  482757 docker.go:234] disabling docker service ...
	I1019 13:15:21.840466  482757 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:15:21.862585  482757 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:15:21.877049  482757 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:15:22.053164  482757 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:15:22.206605  482757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:15:22.220351  482757 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:15:22.245319  482757 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 13:15:22.245389  482757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:22.255179  482757 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:15:22.255248  482757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:22.266698  482757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:22.276194  482757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:22.287900  482757 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:15:22.298364  482757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:22.308833  482757 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:22.317860  482757 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:22.327138  482757 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:15:22.336627  482757 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:15:22.345141  482757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:15:22.489750  482757 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 13:15:22.647109  482757 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:15:22.647225  482757 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:15:22.651508  482757 start.go:563] Will wait 60s for crictl version
	I1019 13:15:22.651623  482757 ssh_runner.go:195] Run: which crictl
	I1019 13:15:22.655995  482757 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:15:22.694840  482757 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 13:15:22.694969  482757 ssh_runner.go:195] Run: crio --version
	I1019 13:15:22.742790  482757 ssh_runner.go:195] Run: crio --version
	I1019 13:15:22.787413  482757 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
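
The start log above ends after minikube rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, default_sysctls) and restarts crio. A minimal sketch for spot-checking that configuration by hand, assuming the profile name and drop-in path shown in this run:

	# inspect the drop-in the start sequence rewrote
	minikube -p no-preload-108149 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# confirm the runtime answers on the socket the log waits for
	minikube -p no-preload-108149 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version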
	
	
	==> CRI-O <==
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.111761956Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.118398796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.119090957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.134656236Z" level=info msg="Created container 294540375ec117ef5624146472fd4938138577d72f86bb7e9d0ed89c55643c62: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd/dashboard-metrics-scraper" id=5ea97f28-1261-447c-a173-39c8fbc0fd6b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.135704332Z" level=info msg="Starting container: 294540375ec117ef5624146472fd4938138577d72f86bb7e9d0ed89c55643c62" id=678f1915-8a5b-4421-b438-6994cc5082cc name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.138266089Z" level=info msg="Started container" PID=1635 containerID=294540375ec117ef5624146472fd4938138577d72f86bb7e9d0ed89c55643c62 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd/dashboard-metrics-scraper id=678f1915-8a5b-4421-b438-6994cc5082cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=a5dd2db60952834521d59f2bdb0bbf53ee403f34efc5727664fd18bb040e1345
	Oct 19 13:14:59 old-k8s-version-842494 conmon[1633]: conmon 294540375ec117ef5624 <ninfo>: container 1635 exited with status 1
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.428836682Z" level=info msg="Removing container: 6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d" id=747a5833-c2f2-4e3a-bffb-16a565f8ce30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.440458491Z" level=info msg="Error loading conmon cgroup of container 6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d: cgroup deleted" id=747a5833-c2f2-4e3a-bffb-16a565f8ce30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:14:59 old-k8s-version-842494 crio[650]: time="2025-10-19T13:14:59.445426787Z" level=info msg="Removed container 6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd/dashboard-metrics-scraper" id=747a5833-c2f2-4e3a-bffb-16a565f8ce30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.451678859Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.455799915Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.455835879Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.455858033Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.459085424Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.459124358Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.45914867Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.462194372Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.462230344Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.46225313Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.465502109Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.46553621Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.465562844Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.468873633Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:15:06 old-k8s-version-842494 crio[650]: time="2025-10-19T13:15:06.468907791Z" level=info msg="Updated default CNI network name to kindnet"
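
The CNI monitoring events above record kindnet writing, updating, and renaming its conflist into place. A quick way to confirm the network configuration CRI-O settled on, assuming the default /etc/cni/net.d path from the log:

	minikube -p old-k8s-version-842494 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist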
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	294540375ec11       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   a5dd2db609528       dashboard-metrics-scraper-5f989dc9cf-f58zd       kubernetes-dashboard
	b705a2c9010d5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   edd1ecf7569c7       storage-provisioner                              kube-system
	f6ec3fd90a976       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   0d683f26491a9       kubernetes-dashboard-8694d4445c-7m5tv            kubernetes-dashboard
	cf9bd9d8e3217       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           59 seconds ago       Running             coredns                     1                   5ef79b67b0965       coredns-5dd5756b68-5mdz7                         kube-system
	f27de09649c7d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   457988159c83c       busybox                                          default
	65042321b8cee       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           59 seconds ago       Running             kube-proxy                  1                   fc3d94733f27d       kube-proxy-v7wq7                                 kube-system
	dd47350cd7bf7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   edd1ecf7569c7       storage-provisioner                              kube-system
	d1b9315af72bf       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   5f02c467b1212       kindnet-7lwtw                                    kube-system
	7b9e97a29ebf3       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   b3bc62cf3225b       kube-controller-manager-old-k8s-version-842494   kube-system
	32e954f04c57f       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   825babba820ef       etcd-old-k8s-version-842494                      kube-system
	379b611212ba2       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   9c806d7c89897       kube-scheduler-old-k8s-version-842494            kube-system
	78f63059c2fea       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   2591a3c5640bb       kube-apiserver-old-k8s-version-842494            kube-system
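
The status table is the runtime's own view of the node, including the Exited dashboard-metrics-scraper attempt recorded in the CRI-O log above. The same listing can be pulled directly with crictl inside the node, assuming the crio socket configured earlier in this run:

	# -a includes exited containers such as the failing scraper
	minikube -p old-k8s-version-842494 ssh -- sudo crictl ps -a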
	
	
	==> coredns [cf9bd9d8e3217e21c9a1fc598471a6f3977f811a901df8104c4be0dd2f49a8fd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57361 - 42043 "HINFO IN 8580207794113153234.9172770925368522785. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018591034s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
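
The readiness messages above show coredns waiting on the API server while the cluster restarts. To tail the same container through Kubernetes rather than this capture, assuming the pod name from the status table and minikube's default context naming:

	kubectl --context old-k8s-version-842494 -n kube-system logs coredns-5dd5756b68-5mdz7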
	
	
	==> describe nodes <==
	Name:               old-k8s-version-842494
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-842494
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=old-k8s-version-842494
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_13_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:13:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-842494
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:15:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:15:15 +0000   Sun, 19 Oct 2025 13:12:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:15:15 +0000   Sun, 19 Oct 2025 13:12:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:15:15 +0000   Sun, 19 Oct 2025 13:12:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:15:15 +0000   Sun, 19 Oct 2025 13:13:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-842494
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                ff91876e-8bed-4e46-9175-4f587101f24f
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 coredns-5dd5756b68-5mdz7                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m7s
	  kube-system                 etcd-old-k8s-version-842494                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m20s
	  kube-system                 kindnet-7lwtw                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m7s
	  kube-system                 kube-apiserver-old-k8s-version-842494             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-old-k8s-version-842494    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-v7wq7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-scheduler-old-k8s-version-842494             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-f58zd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-7m5tv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m6s                   kube-proxy       
	  Normal  Starting                 58s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m29s (x8 over 2m29s)  kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s (x8 over 2m29s)  kubelet          Node old-k8s-version-842494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m29s (x8 over 2m29s)  kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m20s                  kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m20s                  kubelet          Node old-k8s-version-842494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m20s                  kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m8s                   node-controller  Node old-k8s-version-842494 event: Registered Node old-k8s-version-842494 in Controller
	  Normal  NodeReady                112s                   kubelet          Node old-k8s-version-842494 status is now: NodeReady
	  Normal  Starting                 72s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s (x8 over 71s)      kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s (x8 over 71s)      kubelet          Node old-k8s-version-842494 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x8 over 71s)      kubelet          Node old-k8s-version-842494 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                    node-controller  Node old-k8s-version-842494 event: Registered Node old-k8s-version-842494 in Controller
	
	
	==> dmesg <==
	[Oct19 12:51] overlayfs: idmapped layers are currently not supported
	[Oct19 12:52] overlayfs: idmapped layers are currently not supported
	[Oct19 12:53] overlayfs: idmapped layers are currently not supported
	[Oct19 12:54] overlayfs: idmapped layers are currently not supported
	[Oct19 12:56] overlayfs: idmapped layers are currently not supported
	[ +16.315179] overlayfs: idmapped layers are currently not supported
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	[Oct19 13:15] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [32e954f04c57f4f9b9177fcb833b4861a5da3dff1bf1fdbdbd2c4d4bc0ebf7a3] <==
	{"level":"info","ts":"2025-10-19T13:14:16.025486Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-19T13:14:16.025569Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T13:14:16.025595Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T13:14:16.027705Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T13:14:16.027746Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T13:14:16.027754Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T13:14:16.081564Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-19T13:14:16.10383Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-19T13:14:16.094943Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T13:14:16.105169Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-19T13:14:16.122038Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-19T13:14:16.971832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-19T13:14:16.97196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-19T13:14:16.97201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-19T13:14:16.972051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-19T13:14:16.972084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-19T13:14:16.972122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-19T13:14:16.972163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-19T13:14:16.974486Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-842494 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T13:14:16.974704Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T13:14:16.976522Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-19T13:14:16.98175Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T13:14:16.982776Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-19T13:14:17.005735Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T13:14:17.005843Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:15:25 up  2:57,  0 user,  load average: 2.88, 2.90, 2.60
	Linux old-k8s-version-842494 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d1b9315af72bf41414a7e6d2ce0d7b027d492620db2491d7a2387dc8a91676c4] <==
	I1019 13:14:26.222268       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:14:26.225842       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 13:14:26.226121       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:14:26.226135       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:14:26.226149       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:14:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:14:26.449607       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:14:26.449625       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:14:26.449633       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:14:26.450545       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 13:14:56.451730       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 13:14:56.451894       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 13:14:56.451982       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 13:14:56.452066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1019 13:14:57.950398       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:14:57.950499       1 metrics.go:72] Registering metrics
	I1019 13:14:57.950577       1 controller.go:711] "Syncing nftables rules"
	I1019 13:15:06.449497       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 13:15:06.450899       1 main.go:301] handling current node
	I1019 13:15:16.449238       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 13:15:16.449374       1 main.go:301] handling current node
	
	
	==> kube-apiserver [78f63059c2fea7e2266edb01a7a8d4ae119845e91cc5ae1b0a044e0c22443f3e] <==
	I1019 13:14:24.374645       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1019 13:14:24.375655       1 aggregator.go:166] initial CRD sync complete...
	I1019 13:14:24.375997       1 autoregister_controller.go:141] Starting autoregister controller
	I1019 13:14:24.376051       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 13:14:24.449147       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:14:24.473067       1 trace.go:236] Trace[1639828164]: "DeltaFIFO Pop Process" ID:v1.admissionregistration.k8s.io,Depth:19,Reason:slow event handlers blocking the queue (19-Oct-2025 13:14:24.364) (total time: 108ms):
	Trace[1639828164]: [108.211658ms] [108.211658ms] END
	I1019 13:14:24.474117       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1019 13:14:24.474143       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1019 13:14:24.485869       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 13:14:24.546324       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1019 13:14:24.578635       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1019 13:14:24.585585       1 cache.go:39] Caches are synced for autoregister controller
	E1019 13:14:24.669294       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 13:14:24.878933       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:14:27.830740       1 controller.go:624] quota admission added evaluator for: namespaces
	I1019 13:14:27.885783       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1019 13:14:27.915882       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:14:27.942888       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:14:27.961031       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1019 13:14:28.030324       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.248.70"}
	I1019 13:14:28.051699       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.98.161"}
	I1019 13:14:37.572289       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:14:37.581962       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1019 13:14:37.614552       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7b9e97a29ebf3e504604e73866544e7d0fd265d8ac39504373c3597d4796cbae] <==
	I1019 13:14:37.647696       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-f58zd"
	I1019 13:14:37.648150       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-7m5tv"
	I1019 13:14:37.667096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="63.279098ms"
	I1019 13:14:37.672046       1 shared_informer.go:318] Caches are synced for resource quota
	I1019 13:14:37.672226       1 shared_informer.go:318] Caches are synced for resource quota
	I1019 13:14:37.694832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="90.676912ms"
	I1019 13:14:37.701850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.66996ms"
	I1019 13:14:37.702030       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.884µs"
	I1019 13:14:37.702155       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.998µs"
	I1019 13:14:37.712376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.476664ms"
	I1019 13:14:37.712567       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.023µs"
	I1019 13:14:37.722447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.434µs"
	I1019 13:14:37.738177       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="56.206µs"
	I1019 13:14:38.029989       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 13:14:38.052562       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 13:14:38.052608       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1019 13:14:42.381273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.561µs"
	I1019 13:14:43.405658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.475µs"
	I1019 13:14:44.407138       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.392µs"
	I1019 13:14:48.434575       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.291343ms"
	I1019 13:14:48.434703       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.097µs"
	I1019 13:14:59.448120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.173µs"
	I1019 13:15:05.632513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.584257ms"
	I1019 13:15:05.633034       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.519µs"
	I1019 13:15:07.991909       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.636µs"
	
	
	==> kube-proxy [65042321b8cee3cb9ba55a04d613c419d16e93b98467daa77830bad1dab0db52] <==
	I1019 13:14:26.694298       1 server_others.go:69] "Using iptables proxy"
	I1019 13:14:26.796308       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1019 13:14:26.950998       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:14:26.958407       1 server_others.go:152] "Using iptables Proxier"
	I1019 13:14:26.958444       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1019 13:14:26.958452       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1019 13:14:26.958480       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1019 13:14:26.958690       1 server.go:846] "Version info" version="v1.28.0"
	I1019 13:14:26.958700       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:14:26.973271       1 config.go:188] "Starting service config controller"
	I1019 13:14:26.973304       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1019 13:14:26.973336       1 config.go:97] "Starting endpoint slice config controller"
	I1019 13:14:26.973340       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1019 13:14:26.976345       1 config.go:315] "Starting node config controller"
	I1019 13:14:26.976385       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1019 13:14:27.073915       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1019 13:14:27.073971       1 shared_informer.go:318] Caches are synced for service config
	I1019 13:14:27.077304       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [379b611212ba298f43db75b4d6fddb918b70f6a8d89ff799a0a9541dacd968cd] <==
	W1019 13:14:24.047361       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 13:14:24.047388       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 13:14:24.047398       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 13:14:24.047404       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 13:14:24.326207       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1019 13:14:24.333969       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:14:24.338566       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1019 13:14:24.341917       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:14:24.348425       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1019 13:14:24.341956       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W1019 13:14:24.374224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1019 13:14:24.374273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1019 13:14:24.374364       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1019 13:14:24.374380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1019 13:14:24.374473       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1019 13:14:24.374484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1019 13:14:24.374718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1019 13:14:24.374742       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1019 13:14:24.378292       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1019 13:14:24.378322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1019 13:14:24.378460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1019 13:14:24.378477       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1019 13:14:24.378538       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1019 13:14:24.378550       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1019 13:14:24.454119       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 19 13:14:37 old-k8s-version-842494 kubelet[777]: I1019 13:14:37.785875     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/230d520c-cc44-4ec0-b4e5-9535bb6640cd-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-f58zd\" (UID: \"230d520c-cc44-4ec0-b4e5-9535bb6640cd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd"
	Oct 19 13:14:37 old-k8s-version-842494 kubelet[777]: I1019 13:14:37.786218     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9753fd7f-7e7b-4446-adf9-ab41cecf44d6-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-7m5tv\" (UID: \"9753fd7f-7e7b-4446-adf9-ab41cecf44d6\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7m5tv"
	Oct 19 13:14:37 old-k8s-version-842494 kubelet[777]: I1019 13:14:37.786270     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k972x\" (UniqueName: \"kubernetes.io/projected/9753fd7f-7e7b-4446-adf9-ab41cecf44d6-kube-api-access-k972x\") pod \"kubernetes-dashboard-8694d4445c-7m5tv\" (UID: \"9753fd7f-7e7b-4446-adf9-ab41cecf44d6\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7m5tv"
	Oct 19 13:14:37 old-k8s-version-842494 kubelet[777]: I1019 13:14:37.786305     777 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxpjg\" (UniqueName: \"kubernetes.io/projected/230d520c-cc44-4ec0-b4e5-9535bb6640cd-kube-api-access-pxpjg\") pod \"dashboard-metrics-scraper-5f989dc9cf-f58zd\" (UID: \"230d520c-cc44-4ec0-b4e5-9535bb6640cd\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd"
	Oct 19 13:14:38 old-k8s-version-842494 kubelet[777]: W1019 13:14:38.018272     777 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/143af978a0b455bc334b87c1c8127c2caaa521684310e8ef206a9f484c4a28dd/crio-0d683f26491a92ecc6b95359c7dcf722cb9037b06d5c4ea8999a3b6eecc1e104 WatchSource:0}: Error finding container 0d683f26491a92ecc6b95359c7dcf722cb9037b06d5c4ea8999a3b6eecc1e104: Status 404 returned error can't find the container with id 0d683f26491a92ecc6b95359c7dcf722cb9037b06d5c4ea8999a3b6eecc1e104
	Oct 19 13:14:42 old-k8s-version-842494 kubelet[777]: I1019 13:14:42.367695     777 scope.go:117] "RemoveContainer" containerID="28a068e846942cfda600e70a04682a52b15be800ada906289239b96cfbf9f168"
	Oct 19 13:14:43 old-k8s-version-842494 kubelet[777]: I1019 13:14:43.376306     777 scope.go:117] "RemoveContainer" containerID="6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d"
	Oct 19 13:14:43 old-k8s-version-842494 kubelet[777]: I1019 13:14:43.377374     777 scope.go:117] "RemoveContainer" containerID="28a068e846942cfda600e70a04682a52b15be800ada906289239b96cfbf9f168"
	Oct 19 13:14:43 old-k8s-version-842494 kubelet[777]: E1019 13:14:43.377427     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f58zd_kubernetes-dashboard(230d520c-cc44-4ec0-b4e5-9535bb6640cd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd" podUID="230d520c-cc44-4ec0-b4e5-9535bb6640cd"
	Oct 19 13:14:44 old-k8s-version-842494 kubelet[777]: I1019 13:14:44.384803     777 scope.go:117] "RemoveContainer" containerID="6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d"
	Oct 19 13:14:44 old-k8s-version-842494 kubelet[777]: E1019 13:14:44.385078     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f58zd_kubernetes-dashboard(230d520c-cc44-4ec0-b4e5-9535bb6640cd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd" podUID="230d520c-cc44-4ec0-b4e5-9535bb6640cd"
	Oct 19 13:14:47 old-k8s-version-842494 kubelet[777]: I1019 13:14:47.978080     777 scope.go:117] "RemoveContainer" containerID="6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d"
	Oct 19 13:14:47 old-k8s-version-842494 kubelet[777]: E1019 13:14:47.978384     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f58zd_kubernetes-dashboard(230d520c-cc44-4ec0-b4e5-9535bb6640cd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd" podUID="230d520c-cc44-4ec0-b4e5-9535bb6640cd"
	Oct 19 13:14:56 old-k8s-version-842494 kubelet[777]: I1019 13:14:56.416210     777 scope.go:117] "RemoveContainer" containerID="dd47350cd7bf7b6f9e2be9050bc252a57e4193e333974fa6bd6ac582509ea4b3"
	Oct 19 13:14:56 old-k8s-version-842494 kubelet[777]: I1019 13:14:56.446334     777 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7m5tv" podStartSLOduration=9.657857245 podCreationTimestamp="2025-10-19 13:14:37 +0000 UTC" firstStartedPulling="2025-10-19 13:14:38.022499167 +0000 UTC m=+24.270004137" lastFinishedPulling="2025-10-19 13:14:47.81014868 +0000 UTC m=+34.057653658" observedRunningTime="2025-10-19 13:14:48.413344657 +0000 UTC m=+34.660849635" watchObservedRunningTime="2025-10-19 13:14:56.445506766 +0000 UTC m=+42.693011744"
	Oct 19 13:14:59 old-k8s-version-842494 kubelet[777]: I1019 13:14:59.108488     777 scope.go:117] "RemoveContainer" containerID="6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d"
	Oct 19 13:14:59 old-k8s-version-842494 kubelet[777]: I1019 13:14:59.426849     777 scope.go:117] "RemoveContainer" containerID="6c2ddd31210ff568d3b7e73927ba5c877b421bc1e73fc15e2ff547564d00766d"
	Oct 19 13:14:59 old-k8s-version-842494 kubelet[777]: I1019 13:14:59.427120     777 scope.go:117] "RemoveContainer" containerID="294540375ec117ef5624146472fd4938138577d72f86bb7e9d0ed89c55643c62"
	Oct 19 13:14:59 old-k8s-version-842494 kubelet[777]: E1019 13:14:59.427386     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f58zd_kubernetes-dashboard(230d520c-cc44-4ec0-b4e5-9535bb6640cd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd" podUID="230d520c-cc44-4ec0-b4e5-9535bb6640cd"
	Oct 19 13:15:07 old-k8s-version-842494 kubelet[777]: I1019 13:15:07.976904     777 scope.go:117] "RemoveContainer" containerID="294540375ec117ef5624146472fd4938138577d72f86bb7e9d0ed89c55643c62"
	Oct 19 13:15:07 old-k8s-version-842494 kubelet[777]: E1019 13:15:07.977228     777 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-f58zd_kubernetes-dashboard(230d520c-cc44-4ec0-b4e5-9535bb6640cd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-f58zd" podUID="230d520c-cc44-4ec0-b4e5-9535bb6640cd"
	Oct 19 13:15:19 old-k8s-version-842494 kubelet[777]: I1019 13:15:19.302657     777 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 19 13:15:19 old-k8s-version-842494 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 13:15:19 old-k8s-version-842494 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 13:15:19 old-k8s-version-842494 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f6ec3fd90a9761e8bdef08c9b10e5ab281f98ed0dcb8b87dd9247f1a32992dbf] <==
	2025/10/19 13:14:47 Using namespace: kubernetes-dashboard
	2025/10/19 13:14:47 Using in-cluster config to connect to apiserver
	2025/10/19 13:14:47 Using secret token for csrf signing
	2025/10/19 13:14:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 13:14:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 13:14:47 Successful initial request to the apiserver, version: v1.28.0
	2025/10/19 13:14:47 Generating JWE encryption key
	2025/10/19 13:14:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 13:14:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 13:14:48 Initializing JWE encryption key from synchronized object
	2025/10/19 13:14:48 Creating in-cluster Sidecar client
	2025/10/19 13:14:48 Serving insecurely on HTTP port: 9090
	2025/10/19 13:14:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:15:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:14:47 Starting overwatch
	
	
	==> storage-provisioner [b705a2c9010d53b604a774ef561db9d3e61e0b62bf535ab25415f9195b64ff30] <==
	I1019 13:14:56.461270       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 13:14:56.476700       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 13:14:56.476746       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 13:15:13.874432       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 13:15:13.874719       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-842494_94251d4c-2e21-4f49-9370-acba525d5bab!
	I1019 13:15:13.877103       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba2428d6-3741-40ad-80da-985be3fb4b28", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-842494_94251d4c-2e21-4f49-9370-acba525d5bab became leader
	I1019 13:15:13.974878       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-842494_94251d4c-2e21-4f49-9370-acba525d5bab!
	
	
	==> storage-provisioner [dd47350cd7bf7b6f9e2be9050bc252a57e4193e333974fa6bd6ac582509ea4b3] <==
	I1019 13:14:26.213425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 13:14:56.224154       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-842494 -n old-k8s-version-842494
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-842494 -n old-k8s-version-842494: exit status 2 (524.619274ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-842494 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (8.12s)
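Triage note: the Pause failures in this group share one proximate cause, visible in the no-preload run below: after the kubelet is stopped, minikube's pause path runs `sudo runc list -f json` on the node, and that command exits 1 with `open /run/runc: no such file or directory`, so the run aborts with GUEST_PAUSE. The probe can be re-run outside the test harness; the sketch below is a minimal illustration only (not part of the suite), and the use of `minikube ssh` to reach the node plus the hard-coded profile name are assumptions for the example.

	// probe_runc.go: minimal sketch that re-runs the failing pause probe.
	// Assumes the minikube binary is on PATH and the profile still exists.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "no-preload-108149" // taken from the failing test; adjust as needed
		// Same command the pause path runs on the node (see the stderr log below):
		//   sudo runc list -f json
		cmd := exec.Command("minikube", "-p", profile, "ssh", "--",
			"sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// Exit status 1 with "open /run/runc: no such file or directory"
			// matches the GUEST_PAUSE failure recorded in this report.
			fmt.Println("runc list failed:", err)
		}
	}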

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-108149 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-108149 --alsologtostderr -v=1: exit status 80 (2.567189088s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-108149 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 13:16:28.198564  488736 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:16:28.198723  488736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:16:28.198746  488736 out.go:374] Setting ErrFile to fd 2...
	I1019 13:16:28.198752  488736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:16:28.199038  488736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:16:28.199325  488736 out.go:368] Setting JSON to false
	I1019 13:16:28.199368  488736 mustload.go:65] Loading cluster: no-preload-108149
	I1019 13:16:28.199837  488736 config.go:182] Loaded profile config "no-preload-108149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:16:28.200508  488736 cli_runner.go:164] Run: docker container inspect no-preload-108149 --format={{.State.Status}}
	I1019 13:16:28.220388  488736 host.go:66] Checking if "no-preload-108149" exists ...
	I1019 13:16:28.220722  488736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:16:28.281634  488736 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-19 13:16:28.271793037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:16:28.282493  488736 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-108149 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 13:16:28.285810  488736 out.go:179] * Pausing node no-preload-108149 ... 
	I1019 13:16:28.288682  488736 host.go:66] Checking if "no-preload-108149" exists ...
	I1019 13:16:28.289024  488736 ssh_runner.go:195] Run: systemctl --version
	I1019 13:16:28.289074  488736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-108149
	I1019 13:16:28.315849  488736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/no-preload-108149/id_rsa Username:docker}
	I1019 13:16:28.428246  488736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:16:28.442973  488736 pause.go:52] kubelet running: true
	I1019 13:16:28.443050  488736 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:16:28.693537  488736 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:16:28.693617  488736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:16:28.763549  488736 cri.go:89] found id: "6643d7449e536bebc7c48cb509e939d206eca0e67efbecc9a49f6f230d6a8f2e"
	I1019 13:16:28.763573  488736 cri.go:89] found id: "35a081210b7fa08acbe3227adf5610734dfa60738cda733fc91359b203bcf29b"
	I1019 13:16:28.763578  488736 cri.go:89] found id: "3602ce3b8d0b42a07e319435c2d257a4f4c245eb0405e0ad593bf94803f45907"
	I1019 13:16:28.763582  488736 cri.go:89] found id: "d7af5087f11ac0a282a7c09f5c3f2ad9affeab8823717f75f713a854c8124884"
	I1019 13:16:28.763585  488736 cri.go:89] found id: "f06654b2d2683ec240f70fa86e309b5a103311a29fb5afb2f214482a14902133"
	I1019 13:16:28.763589  488736 cri.go:89] found id: "0452bd1f37844e20d71713464f7c02412906aa5aeab0336266163b06aba35d56"
	I1019 13:16:28.763592  488736 cri.go:89] found id: "24a75ddccb641f284753e265035d0ec049f86894b9a8bb4c8eb68267f2a6bbd3"
	I1019 13:16:28.763594  488736 cri.go:89] found id: "b649715b02d1cdf3f028d00c9f1eda59d4501cabfe3bf7e05ad588e094515f85"
	I1019 13:16:28.763597  488736 cri.go:89] found id: "69cd340c87d966c00eb54338c8930e6a5166ffc684c24d32e2f7db4bde1a9182"
	I1019 13:16:28.763604  488736 cri.go:89] found id: "e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b"
	I1019 13:16:28.763607  488736 cri.go:89] found id: "75b1666aca773065101164715baec4b2ea6e97910e9b1b816056fe57b3894d8b"
	I1019 13:16:28.763610  488736 cri.go:89] found id: ""
	I1019 13:16:28.763658  488736 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:16:28.775181  488736 retry.go:31] will retry after 343.455919ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:16:28Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:16:29.119589  488736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:16:29.133044  488736 pause.go:52] kubelet running: false
	I1019 13:16:29.133165  488736 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:16:29.316511  488736 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:16:29.316588  488736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:16:29.387796  488736 cri.go:89] found id: "6643d7449e536bebc7c48cb509e939d206eca0e67efbecc9a49f6f230d6a8f2e"
	I1019 13:16:29.387858  488736 cri.go:89] found id: "35a081210b7fa08acbe3227adf5610734dfa60738cda733fc91359b203bcf29b"
	I1019 13:16:29.387877  488736 cri.go:89] found id: "3602ce3b8d0b42a07e319435c2d257a4f4c245eb0405e0ad593bf94803f45907"
	I1019 13:16:29.387897  488736 cri.go:89] found id: "d7af5087f11ac0a282a7c09f5c3f2ad9affeab8823717f75f713a854c8124884"
	I1019 13:16:29.387907  488736 cri.go:89] found id: "f06654b2d2683ec240f70fa86e309b5a103311a29fb5afb2f214482a14902133"
	I1019 13:16:29.387911  488736 cri.go:89] found id: "0452bd1f37844e20d71713464f7c02412906aa5aeab0336266163b06aba35d56"
	I1019 13:16:29.387914  488736 cri.go:89] found id: "24a75ddccb641f284753e265035d0ec049f86894b9a8bb4c8eb68267f2a6bbd3"
	I1019 13:16:29.387917  488736 cri.go:89] found id: "b649715b02d1cdf3f028d00c9f1eda59d4501cabfe3bf7e05ad588e094515f85"
	I1019 13:16:29.387933  488736 cri.go:89] found id: "69cd340c87d966c00eb54338c8930e6a5166ffc684c24d32e2f7db4bde1a9182"
	I1019 13:16:29.387952  488736 cri.go:89] found id: "e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b"
	I1019 13:16:29.387961  488736 cri.go:89] found id: "75b1666aca773065101164715baec4b2ea6e97910e9b1b816056fe57b3894d8b"
	I1019 13:16:29.387965  488736 cri.go:89] found id: ""
	I1019 13:16:29.388026  488736 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:16:29.399155  488736 retry.go:31] will retry after 338.983114ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:16:29Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:16:29.738747  488736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:16:29.752267  488736 pause.go:52] kubelet running: false
	I1019 13:16:29.752367  488736 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:16:29.931468  488736 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:16:29.931572  488736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:16:30.001454  488736 cri.go:89] found id: "6643d7449e536bebc7c48cb509e939d206eca0e67efbecc9a49f6f230d6a8f2e"
	I1019 13:16:30.001541  488736 cri.go:89] found id: "35a081210b7fa08acbe3227adf5610734dfa60738cda733fc91359b203bcf29b"
	I1019 13:16:30.001561  488736 cri.go:89] found id: "3602ce3b8d0b42a07e319435c2d257a4f4c245eb0405e0ad593bf94803f45907"
	I1019 13:16:30.001582  488736 cri.go:89] found id: "d7af5087f11ac0a282a7c09f5c3f2ad9affeab8823717f75f713a854c8124884"
	I1019 13:16:30.001605  488736 cri.go:89] found id: "f06654b2d2683ec240f70fa86e309b5a103311a29fb5afb2f214482a14902133"
	I1019 13:16:30.001626  488736 cri.go:89] found id: "0452bd1f37844e20d71713464f7c02412906aa5aeab0336266163b06aba35d56"
	I1019 13:16:30.001646  488736 cri.go:89] found id: "24a75ddccb641f284753e265035d0ec049f86894b9a8bb4c8eb68267f2a6bbd3"
	I1019 13:16:30.001667  488736 cri.go:89] found id: "b649715b02d1cdf3f028d00c9f1eda59d4501cabfe3bf7e05ad588e094515f85"
	I1019 13:16:30.001740  488736 cri.go:89] found id: "69cd340c87d966c00eb54338c8930e6a5166ffc684c24d32e2f7db4bde1a9182"
	I1019 13:16:30.001765  488736 cri.go:89] found id: "e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b"
	I1019 13:16:30.001786  488736 cri.go:89] found id: "75b1666aca773065101164715baec4b2ea6e97910e9b1b816056fe57b3894d8b"
	I1019 13:16:30.001806  488736 cri.go:89] found id: ""
	I1019 13:16:30.001891  488736 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:16:30.033905  488736 retry.go:31] will retry after 390.116552ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:16:30Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:16:30.424598  488736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:16:30.437624  488736 pause.go:52] kubelet running: false
	I1019 13:16:30.437731  488736 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:16:30.599824  488736 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:16:30.599984  488736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:16:30.674852  488736 cri.go:89] found id: "6643d7449e536bebc7c48cb509e939d206eca0e67efbecc9a49f6f230d6a8f2e"
	I1019 13:16:30.674884  488736 cri.go:89] found id: "35a081210b7fa08acbe3227adf5610734dfa60738cda733fc91359b203bcf29b"
	I1019 13:16:30.674890  488736 cri.go:89] found id: "3602ce3b8d0b42a07e319435c2d257a4f4c245eb0405e0ad593bf94803f45907"
	I1019 13:16:30.674893  488736 cri.go:89] found id: "d7af5087f11ac0a282a7c09f5c3f2ad9affeab8823717f75f713a854c8124884"
	I1019 13:16:30.674896  488736 cri.go:89] found id: "f06654b2d2683ec240f70fa86e309b5a103311a29fb5afb2f214482a14902133"
	I1019 13:16:30.674925  488736 cri.go:89] found id: "0452bd1f37844e20d71713464f7c02412906aa5aeab0336266163b06aba35d56"
	I1019 13:16:30.674934  488736 cri.go:89] found id: "24a75ddccb641f284753e265035d0ec049f86894b9a8bb4c8eb68267f2a6bbd3"
	I1019 13:16:30.674938  488736 cri.go:89] found id: "b649715b02d1cdf3f028d00c9f1eda59d4501cabfe3bf7e05ad588e094515f85"
	I1019 13:16:30.674941  488736 cri.go:89] found id: "69cd340c87d966c00eb54338c8930e6a5166ffc684c24d32e2f7db4bde1a9182"
	I1019 13:16:30.674946  488736 cri.go:89] found id: "e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b"
	I1019 13:16:30.674950  488736 cri.go:89] found id: "75b1666aca773065101164715baec4b2ea6e97910e9b1b816056fe57b3894d8b"
	I1019 13:16:30.674953  488736 cri.go:89] found id: ""
	I1019 13:16:30.675014  488736 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:16:30.689341  488736 out.go:203] 
	W1019 13:16:30.692278  488736 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:16:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 13:16:30.692303  488736 out.go:285] * 
	W1019 13:16:30.699433  488736 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 13:16:30.702453  488736 out.go:203] 

                                                
                                                
** /stderr **
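Note: minikube pause first enumerates running containers via sudo runc list -f json, which reads runc's state directory (default /run/runc); that directory is missing on this node, so every retry and finally the pause itself aborts with GUEST_PAUSE. A minimal check on the node (hypothetical session; CRI-O can point runc at a different state root via a per-runtime runtime_root in crio.conf, so the path here is an assumption):

	$ minikube -p no-preload-108149 ssh
	$ sudo ls -ld /run/runc            # absent here, matching the error above
	$ sudo runc --root /run/runc list  # reproduces the listing failure directly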
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-108149 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-108149
helpers_test.go:243: (dbg) docker inspect no-preload-108149:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0",
	        "Created": "2025-10-19T13:13:42.966864471Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:15:15.996140266Z",
	            "FinishedAt": "2025-10-19T13:15:15.126243591Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/hostname",
	        "HostsPath": "/var/lib/docker/containers/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/hosts",
	        "LogPath": "/var/lib/docker/containers/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0-json.log",
	        "Name": "/no-preload-108149",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-108149:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-108149",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0",
	                "LowerDir": "/var/lib/docker/overlay2/ca33adf3602bb1f3e90dd2bca8f00da7d19763fa3c96fba2f19c6b9ace8c8b7b-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca33adf3602bb1f3e90dd2bca8f00da7d19763fa3c96fba2f19c6b9ace8c8b7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca33adf3602bb1f3e90dd2bca8f00da7d19763fa3c96fba2f19c6b9ace8c8b7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca33adf3602bb1f3e90dd2bca8f00da7d19763fa3c96fba2f19c6b9ace8c8b7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-108149",
	                "Source": "/var/lib/docker/volumes/no-preload-108149/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-108149",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-108149",
	                "name.minikube.sigs.k8s.io": "no-preload-108149",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9e1e1a5e674528b28986c495abb864248ebbfb26d7dd8d3c64b6959fa218ce3",
	            "SandboxKey": "/var/run/docker/netns/f9e1e1a5e674",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-108149": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:ee:0a:2b:21:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02fa40d5a7624754fb29434a70126850295cfdc9e5c6d2dc3c5e97dc6c14e8ed",
	                    "EndpointID": "d091971df39baec63a66d8c438a14ddce8f775d545a415c7dcb4bc72a88cdb7e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-108149",
	                        "4857474c82b9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
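The inspect output confirms the container itself is fine: running, not paused, with SSH (22) and the API server (8443) published on loopback. To pull just the port map rather than the full JSON, the standard Docker CLI offers either of the following (shown against this profile's container):

	$ docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-108149
	$ docker port no-preload-108149 8443   # 127.0.0.1:33436, per the inspect above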
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-108149 -n no-preload-108149
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-108149 -n no-preload-108149: exit status 2 (361.714143ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
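The degraded status is expected at this point: the failed pause had already run "sudo systemctl disable --now kubelet" (see the stderr above), so the host reports Running while the Kubernetes components are down. The same split state can be confirmed directly (standard minikube commands; output omitted):

	$ minikube status -p no-preload-108149
	$ minikube -p no-preload-108149 ssh -- sudo systemctl is-active kubelet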
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-108149 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-108149 logs -n 25: (1.342781319s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ force-systemd-flag-606072 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ delete  │ -p force-systemd-flag-606072                                                                                                                                                                                                                  │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ start   │ -p cert-options-264135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:12 UTC │
	│ ssh     │ cert-options-264135 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ ssh     │ -p cert-options-264135 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ delete  │ -p cert-options-264135                                                                                                                                                                                                                        │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p cert-expiration-088393 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ delete  │ -p cert-expiration-088393                                                                                                                                                                                                                     │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-842494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │                     │
	│ stop    │ -p old-k8s-version-842494 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-842494 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │ 19 Oct 25 13:14 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-108149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ stop    │ -p no-preload-108149 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable dashboard -p no-preload-108149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:16 UTC │
	│ image   │ old-k8s-version-842494 image list --format=json                                                                                                                                                                                               │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ pause   │ -p old-k8s-version-842494 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                                                                                     │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                                                                                     │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834340        │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ image   │ no-preload-108149 image list --format=json                                                                                                                                                                                                    │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ pause   │ -p no-preload-108149 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
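	(Condensed repro from the audit trail above, with flags copied verbatim from the table; addon enables omitted. The pause step is the one that exits 80 with GUEST_PAUSE:)
	
	$ minikube start -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1
	$ minikube stop -p no-preload-108149 --alsologtostderr -v=3
	$ minikube start -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1
	$ minikube pause -p no-preload-108149 --alsologtostderr -v=1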
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:15:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:15:30.752153  485611 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:15:30.752274  485611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:15:30.752279  485611 out.go:374] Setting ErrFile to fd 2...
	I1019 13:15:30.752284  485611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:15:30.752547  485611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:15:30.752947  485611 out.go:368] Setting JSON to false
	I1019 13:15:30.754243  485611 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10681,"bootTime":1760869050,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:15:30.754316  485611 start.go:141] virtualization:  
	I1019 13:15:30.758374  485611 out.go:179] * [embed-certs-834340] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:15:30.761867  485611 notify.go:220] Checking for updates...
	I1019 13:15:30.761834  485611 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:15:30.765621  485611 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:15:30.768796  485611 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:15:30.771931  485611 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:15:30.774941  485611 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:15:30.778128  485611 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:15:30.781610  485611 config.go:182] Loaded profile config "no-preload-108149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:15:30.781719  485611 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:15:30.827365  485611 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:15:30.827494  485611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:15:30.929881  485611 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 13:15:30.92028979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:15:30.929986  485611 docker.go:318] overlay module found
	I1019 13:15:30.933282  485611 out.go:179] * Using the docker driver based on user configuration
	I1019 13:15:30.936285  485611 start.go:305] selected driver: docker
	I1019 13:15:30.936310  485611 start.go:925] validating driver "docker" against <nil>
	I1019 13:15:30.936325  485611 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:15:30.937188  485611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:15:31.046805  485611 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 13:15:31.035241044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:15:31.046997  485611 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 13:15:31.047228  485611 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:15:31.050265  485611 out.go:179] * Using Docker driver with root privileges
	I1019 13:15:31.053099  485611 cni.go:84] Creating CNI manager for ""
	I1019 13:15:31.053173  485611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:15:31.053188  485611 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 13:15:31.053277  485611 start.go:349] cluster config:
	{Name:embed-certs-834340 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:15:31.058344  485611 out.go:179] * Starting "embed-certs-834340" primary control-plane node in "embed-certs-834340" cluster
	I1019 13:15:31.061212  485611 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:15:31.064148  485611 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:15:31.066955  485611 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:15:31.067016  485611 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 13:15:31.067030  485611 cache.go:58] Caching tarball of preloaded images
	I1019 13:15:31.067139  485611 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:15:31.067155  485611 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 13:15:31.067264  485611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/config.json ...
	I1019 13:15:31.067288  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/config.json: {Name:mkf044743046292d05dcaa840723539dd448573b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:31.067464  485611 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:15:31.096808  485611 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:15:31.096835  485611 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:15:31.096848  485611 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:15:31.096871  485611 start.go:360] acquireMachinesLock for embed-certs-834340: {Name:mka158a8ff4f9c1986944dd404295df0d84afabc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:31.096982  485611 start.go:364] duration metric: took 89.831µs to acquireMachinesLock for "embed-certs-834340"
	I1019 13:15:31.097013  485611 start.go:93] Provisioning new machine with config: &{Name:embed-certs-834340 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:15:31.097093  485611 start.go:125] createHost starting for "" (driver="docker")
	I1019 13:15:32.417463  482757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.163997529s)
	I1019 13:15:34.763049  482757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.487791173s)
	I1019 13:15:34.763096  482757 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.472351196s)
	I1019 13:15:34.763133  482757 node_ready.go:35] waiting up to 6m0s for node "no-preload-108149" to be "Ready" ...
	I1019 13:15:34.763433  482757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.932582008s)
	I1019 13:15:34.766570  482757 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-108149 addons enable metrics-server
	
	I1019 13:15:34.769594  482757 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1019 13:15:34.772539  482757 addons.go:514] duration metric: took 9.996248096s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1019 13:15:34.774931  482757 node_ready.go:49] node "no-preload-108149" is "Ready"
	I1019 13:15:34.774999  482757 node_ready.go:38] duration metric: took 11.853033ms for node "no-preload-108149" to be "Ready" ...
	I1019 13:15:34.775039  482757 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:15:34.775133  482757 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:15:34.795761  482757 api_server.go:72] duration metric: took 10.018569627s to wait for apiserver process to appear ...
	I1019 13:15:34.795785  482757 api_server.go:88] waiting for apiserver healthz status ...
	I1019 13:15:34.795805  482757 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 13:15:34.806028  482757 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 13:15:34.807285  482757 api_server.go:141] control plane version: v1.34.1
	I1019 13:15:34.807348  482757 api_server.go:131] duration metric: took 11.554682ms to wait for apiserver health ...
	I1019 13:15:34.807372  482757 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 13:15:34.813657  482757 system_pods.go:59] 8 kube-system pods found
	I1019 13:15:34.813749  482757 system_pods.go:61] "coredns-66bc5c9577-qp7k5" [0f0731c8-758f-4a89-9d62-19ff52f8d9ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:15:34.813776  482757 system_pods.go:61] "etcd-no-preload-108149" [288fa476-5552-477a-8958-75fb017c1f15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:15:34.813814  482757 system_pods.go:61] "kindnet-s5wgc" [eecfcd8e-961b-4469-8bab-a15f4053fcae] Running
	I1019 13:15:34.813842  482757 system_pods.go:61] "kube-apiserver-no-preload-108149" [7fc22236-bfa6-43f2-888e-899c1802dccf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:15:34.813865  482757 system_pods.go:61] "kube-controller-manager-no-preload-108149" [589ab894-5b6a-4901-ae64-033a1841821c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:15:34.813903  482757 system_pods.go:61] "kube-proxy-qfr27" [12f5f5aa-7552-44bc-9a49-879a274e9a57] Running
	I1019 13:15:34.813931  482757 system_pods.go:61] "kube-scheduler-no-preload-108149" [fd497e0f-9bce-4bda-850f-ddc249fc05c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:15:34.813953  482757 system_pods.go:61] "storage-provisioner" [7de7f3d6-6098-48a3-966a-f0a82622bdeb] Running
	I1019 13:15:34.813991  482757 system_pods.go:74] duration metric: took 6.598201ms to wait for pod list to return data ...
	I1019 13:15:34.814018  482757 default_sa.go:34] waiting for default service account to be created ...
	I1019 13:15:34.818593  482757 default_sa.go:45] found service account: "default"
	I1019 13:15:34.818667  482757 default_sa.go:55] duration metric: took 4.626212ms for default service account to be created ...
	I1019 13:15:34.818691  482757 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 13:15:34.822386  482757 system_pods.go:86] 8 kube-system pods found
	I1019 13:15:34.822465  482757 system_pods.go:89] "coredns-66bc5c9577-qp7k5" [0f0731c8-758f-4a89-9d62-19ff52f8d9ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:15:34.822491  482757 system_pods.go:89] "etcd-no-preload-108149" [288fa476-5552-477a-8958-75fb017c1f15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:15:34.822514  482757 system_pods.go:89] "kindnet-s5wgc" [eecfcd8e-961b-4469-8bab-a15f4053fcae] Running
	I1019 13:15:34.822555  482757 system_pods.go:89] "kube-apiserver-no-preload-108149" [7fc22236-bfa6-43f2-888e-899c1802dccf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:15:34.822576  482757 system_pods.go:89] "kube-controller-manager-no-preload-108149" [589ab894-5b6a-4901-ae64-033a1841821c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:15:34.822596  482757 system_pods.go:89] "kube-proxy-qfr27" [12f5f5aa-7552-44bc-9a49-879a274e9a57] Running
	I1019 13:15:34.822635  482757 system_pods.go:89] "kube-scheduler-no-preload-108149" [fd497e0f-9bce-4bda-850f-ddc249fc05c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:15:34.822653  482757 system_pods.go:89] "storage-provisioner" [7de7f3d6-6098-48a3-966a-f0a82622bdeb] Running
	I1019 13:15:34.822689  482757 system_pods.go:126] duration metric: took 3.964607ms to wait for k8s-apps to be running ...
	I1019 13:15:34.822713  482757 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 13:15:34.822797  482757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:15:34.839526  482757 system_svc.go:56] duration metric: took 16.803975ms WaitForService to wait for kubelet
	I1019 13:15:34.839604  482757 kubeadm.go:586] duration metric: took 10.062417338s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:15:34.839650  482757 node_conditions.go:102] verifying NodePressure condition ...
	I1019 13:15:34.847859  482757 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 13:15:34.847949  482757 node_conditions.go:123] node cpu capacity is 2
	I1019 13:15:34.847977  482757 node_conditions.go:105] duration metric: took 8.293929ms to run NodePressure ...
	I1019 13:15:34.848003  482757 start.go:241] waiting for startup goroutines ...
	I1019 13:15:34.848034  482757 start.go:246] waiting for cluster config update ...
	I1019 13:15:34.848073  482757 start.go:255] writing updated cluster config ...
	I1019 13:15:34.848418  482757 ssh_runner.go:195] Run: rm -f paused
	I1019 13:15:34.858155  482757 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:15:34.862088  482757 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qp7k5" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:15:31.100592  485611 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 13:15:31.100830  485611 start.go:159] libmachine.API.Create for "embed-certs-834340" (driver="docker")
	I1019 13:15:31.100877  485611 client.go:168] LocalClient.Create starting
	I1019 13:15:31.100943  485611 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem
	I1019 13:15:31.100992  485611 main.go:141] libmachine: Decoding PEM data...
	I1019 13:15:31.101011  485611 main.go:141] libmachine: Parsing certificate...
	I1019 13:15:31.101072  485611 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem
	I1019 13:15:31.101096  485611 main.go:141] libmachine: Decoding PEM data...
	I1019 13:15:31.101110  485611 main.go:141] libmachine: Parsing certificate...
	I1019 13:15:31.101487  485611 cli_runner.go:164] Run: docker network inspect embed-certs-834340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 13:15:31.137783  485611 cli_runner.go:211] docker network inspect embed-certs-834340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 13:15:31.137891  485611 network_create.go:284] running [docker network inspect embed-certs-834340] to gather additional debugging logs...
	I1019 13:15:31.137915  485611 cli_runner.go:164] Run: docker network inspect embed-certs-834340
	W1019 13:15:31.155705  485611 cli_runner.go:211] docker network inspect embed-certs-834340 returned with exit code 1
	I1019 13:15:31.155739  485611 network_create.go:287] error running [docker network inspect embed-certs-834340]: docker network inspect embed-certs-834340: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-834340 not found
	I1019 13:15:31.155755  485611 network_create.go:289] output of [docker network inspect embed-certs-834340]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-834340 not found
	
	** /stderr **
	I1019 13:15:31.155889  485611 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:15:31.173929  485611 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-319c97358c5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2a:99:c3:44:12:51} reservation:<nil>}
	I1019 13:15:31.174199  485611 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5c09b33e0936 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:93:4b:f6:fd:1c} reservation:<nil>}
	I1019 13:15:31.174513  485611 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c2bbaadd4a8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:8f:96:27:48:2c} reservation:<nil>}
	I1019 13:15:31.174817  485611 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-02fa40d5a762 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:3b:ad:6d:17:1b} reservation:<nil>}
	I1019 13:15:31.175224  485611 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f1320}
	I1019 13:15:31.175241  485611 network_create.go:124] attempt to create docker network embed-certs-834340 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 13:15:31.175299  485611 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-834340 embed-certs-834340
	I1019 13:15:31.238899  485611 network_create.go:108] docker network embed-certs-834340 192.168.85.0/24 created
	I1019 13:15:31.238928  485611 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-834340" container
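The subnet scan above probes existing bridge networks and picks the first free private /24. A minimal by-hand sketch with plain Docker CLI (the network name and 192.168.85.0/24 come from the log; the create flags mirror the command minikube issues):

	# list which private /24s the existing bridge networks already occupy
	docker network ls --filter driver=bridge -q \
	  | xargs docker network inspect --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# then create the first free one the same way the log does
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o com.docker.network.driver.mtu=1500 embed-certs-834340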
	I1019 13:15:31.239013  485611 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 13:15:31.265841  485611 cli_runner.go:164] Run: docker volume create embed-certs-834340 --label name.minikube.sigs.k8s.io=embed-certs-834340 --label created_by.minikube.sigs.k8s.io=true
	I1019 13:15:31.294108  485611 oci.go:103] Successfully created a docker volume embed-certs-834340
	I1019 13:15:31.294208  485611 cli_runner.go:164] Run: docker run --rm --name embed-certs-834340-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-834340 --entrypoint /usr/bin/test -v embed-certs-834340:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 13:15:32.013273  485611 oci.go:107] Successfully prepared a docker volume embed-certs-834340
	I1019 13:15:32.013351  485611 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:15:32.013376  485611 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 13:15:32.013455  485611 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-834340:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 13:15:36.868283  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:15:38.881873  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:15:37.787570  485611 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-834340:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.774071472s)
	I1019 13:15:37.787600  485611 kic.go:203] duration metric: took 5.774220438s to extract preloaded images to volume ...
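The ~5.8s step above streams the preload tarball into the volume through a throwaway tar container. To inspect such a tarball locally before extraction, something like this works (assumes lz4 is installed; the path is the cached tarball from the log):

	# list the first few entries of the lz4-compressed preload without extracting it
	lz4 -dc /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 \
	  | tar -t | head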
	W1019 13:15:37.787739  485611 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 13:15:37.787838  485611 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 13:15:37.869837  485611 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-834340 --name embed-certs-834340 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-834340 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-834340 --network embed-certs-834340 --ip 192.168.85.2 --volume embed-certs-834340:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 13:15:38.233152  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Running}}
	I1019 13:15:38.255987  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:15:38.281586  485611 cli_runner.go:164] Run: docker exec embed-certs-834340 stat /var/lib/dpkg/alternatives/iptables
	I1019 13:15:38.368772  485611 oci.go:144] the created container "embed-certs-834340" has a running status.
	I1019 13:15:38.368806  485611 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa...
	I1019 13:15:38.873953  485611 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 13:15:38.904296  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:15:38.935449  485611 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 13:15:38.935469  485611 kic_runner.go:114] Args: [docker exec --privileged embed-certs-834340 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 13:15:39.027042  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:15:39.054313  485611 machine.go:93] provisionDockerMachine start ...
	I1019 13:15:39.054396  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:39.087401  485611 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:39.087785  485611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1019 13:15:39.087805  485611 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:15:39.088436  485611 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55330->127.0.0.1:33438: read: connection reset by peer
	W1019 13:15:41.375929  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:15:43.887675  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:15:42.270174  485611 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-834340
	
	I1019 13:15:42.270200  485611 ubuntu.go:182] provisioning hostname "embed-certs-834340"
	I1019 13:15:42.270317  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:42.304756  485611 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:42.305079  485611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1019 13:15:42.305096  485611 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-834340 && echo "embed-certs-834340" | sudo tee /etc/hostname
	I1019 13:15:42.487800  485611 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-834340
	
	I1019 13:15:42.487937  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:42.514794  485611 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:42.515135  485611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1019 13:15:42.515152  485611 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-834340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-834340/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-834340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:15:42.666439  485611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 13:15:42.666525  485611 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:15:42.666561  485611 ubuntu.go:190] setting up certificates
	I1019 13:15:42.666584  485611 provision.go:84] configureAuth start
	I1019 13:15:42.666677  485611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-834340
	I1019 13:15:42.705847  485611 provision.go:143] copyHostCerts
	I1019 13:15:42.705925  485611 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:15:42.705934  485611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:15:42.706003  485611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:15:42.706086  485611 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:15:42.706102  485611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:15:42.706130  485611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:15:42.706208  485611 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:15:42.706220  485611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:15:42.706247  485611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:15:42.706315  485611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.embed-certs-834340 san=[127.0.0.1 192.168.85.2 embed-certs-834340 localhost minikube]
	I1019 13:15:43.156249  485611 provision.go:177] copyRemoteCerts
	I1019 13:15:43.156363  485611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:15:43.156427  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:43.174572  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:15:43.278859  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 13:15:43.300911  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:15:43.326576  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 13:15:43.352381  485611 provision.go:87] duration metric: took 685.759035ms to configureAuth
	I1019 13:15:43.352414  485611 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:15:43.352730  485611 config.go:182] Loaded profile config "embed-certs-834340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:15:43.352908  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:43.379173  485611 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:43.379505  485611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1019 13:15:43.379528  485611 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:15:43.813644  485611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:15:43.813668  485611 machine.go:96] duration metric: took 4.759335125s to provisionDockerMachine
	I1019 13:15:43.813692  485611 client.go:171] duration metric: took 12.712790241s to LocalClient.Create
	I1019 13:15:43.813725  485611 start.go:167] duration metric: took 12.712896639s to libmachine.API.Create "embed-certs-834340"
	I1019 13:15:43.813738  485611 start.go:293] postStartSetup for "embed-certs-834340" (driver="docker")
	I1019 13:15:43.813750  485611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:15:43.813834  485611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:15:43.813893  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:43.853764  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:15:43.975754  485611 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:15:43.979773  485611 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:15:43.979807  485611 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:15:43.979819  485611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:15:43.979879  485611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:15:43.979971  485611 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:15:43.980084  485611 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:15:43.990366  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:15:44.026850  485611 start.go:296] duration metric: took 213.095418ms for postStartSetup
	I1019 13:15:44.027265  485611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-834340
	I1019 13:15:44.054045  485611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/config.json ...
	I1019 13:15:44.054331  485611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:15:44.054392  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:44.079758  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:15:44.193132  485611 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:15:44.198727  485611 start.go:128] duration metric: took 13.101618335s to createHost
	I1019 13:15:44.198749  485611 start.go:83] releasing machines lock for "embed-certs-834340", held for 13.10175117s
	I1019 13:15:44.198819  485611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-834340
	I1019 13:15:44.230748  485611 ssh_runner.go:195] Run: cat /version.json
	I1019 13:15:44.230799  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:44.231038  485611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:15:44.231104  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:44.262295  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:15:44.278464  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:15:44.410320  485611 ssh_runner.go:195] Run: systemctl --version
	I1019 13:15:44.515307  485611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:15:44.598169  485611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:15:44.607951  485611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:15:44.608027  485611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:15:44.645950  485611 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 13:15:44.645976  485611 start.go:495] detecting cgroup driver to use...
	I1019 13:15:44.646006  485611 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:15:44.646063  485611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:15:44.671924  485611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:15:44.695262  485611 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:15:44.695329  485611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:15:44.719033  485611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:15:44.748004  485611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:15:44.931648  485611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:15:45.186177  485611 docker.go:234] disabling docker service ...
	I1019 13:15:45.186333  485611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:15:45.238422  485611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:15:45.261985  485611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:15:45.483407  485611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:15:45.662095  485611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:15:45.677877  485611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:15:45.694861  485611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 13:15:45.694929  485611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:45.705250  485611 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:15:45.705321  485611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:45.717192  485611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:45.727649  485611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:45.737493  485611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:15:45.746952  485611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:45.758879  485611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:45.774911  485611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
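One way to confirm the net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf (the expected values are reconstructed from the commands in the log, not captured from the node):

	# verify the edited keys
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the sed commands above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",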
	I1019 13:15:45.786138  485611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:15:45.795280  485611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:15:45.815974  485611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:15:45.976264  485611 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 13:15:46.662235  485611 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:15:46.662342  485611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:15:46.668459  485611 start.go:563] Will wait 60s for crictl version
	I1019 13:15:46.668559  485611 ssh_runner.go:195] Run: which crictl
	I1019 13:15:46.672663  485611 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:15:46.720764  485611 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 13:15:46.720881  485611 ssh_runner.go:195] Run: crio --version
	I1019 13:15:46.759144  485611 ssh_runner.go:195] Run: crio --version
	I1019 13:15:46.800654  485611 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 13:15:46.803912  485611 cli_runner.go:164] Run: docker network inspect embed-certs-834340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:15:46.829336  485611 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 13:15:46.833973  485611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:15:46.847307  485611 kubeadm.go:883] updating cluster {Name:embed-certs-834340 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1019 13:15:46.847433  485611 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:15:46.847493  485611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:15:46.903960  485611 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:15:46.903986  485611 crio.go:433] Images already preloaded, skipping extraction
	I1019 13:15:46.904040  485611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:15:46.931965  485611 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:15:46.931990  485611 cache_images.go:85] Images are preloaded, skipping loading
	I1019 13:15:46.931999  485611 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 13:15:46.932096  485611 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-834340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
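A quick way to confirm that unit and its drop-in actually landed on the node (generic systemd commands, not part of the test run):

	systemctl cat kubelet              # shows kubelet.service plus its drop-in files
	systemctl status kubelet --no-pager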
	I1019 13:15:46.932176  485611 ssh_runner.go:195] Run: crio config
	I1019 13:15:47.007261  485611 cni.go:84] Creating CNI manager for ""
	I1019 13:15:47.007283  485611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:15:47.007333  485611 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 13:15:47.007364  485611 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-834340 NodeName:embed-certs-834340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:15:47.007539  485611 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-834340"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 13:15:47.007627  485611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 13:15:47.016474  485611 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 13:15:47.016548  485611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 13:15:47.023902  485611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1019 13:15:47.038063  485611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:15:47.064478  485611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
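The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new here. One way to sanity-check such a file before init, assuming the binary path shown in the log (kubeadm's built-in validator, available in v1.34):

	# validate the staged config against the kubeadm API types
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new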
	I1019 13:15:47.082948  485611 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 13:15:47.087685  485611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:15:47.102886  485611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:15:47.263124  485611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:15:47.280687  485611 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340 for IP: 192.168.85.2
	I1019 13:15:47.280708  485611 certs.go:195] generating shared ca certs ...
	I1019 13:15:47.280725  485611 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:47.280924  485611 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 13:15:47.280990  485611 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 13:15:47.281000  485611 certs.go:257] generating profile certs ...
	I1019 13:15:47.281080  485611 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/client.key
	I1019 13:15:47.281105  485611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/client.crt with IP's: []
	I1019 13:15:47.685607  485611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/client.crt ...
	I1019 13:15:47.685638  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/client.crt: {Name:mkbccb838549f1f87cffd774a53342e8ce836583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:47.685903  485611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/client.key ...
	I1019 13:15:47.685919  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/client.key: {Name:mk128314207ecf3cca665d607d4437ed612a47b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:47.686056  485611 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key.21a79282
	I1019 13:15:47.686077  485611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.crt.21a79282 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1019 13:15:47.991490  485611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.crt.21a79282 ...
	I1019 13:15:47.991521  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.crt.21a79282: {Name:mk93b731584176dc4e3c875d0a3b0188cf141876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:47.991734  485611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key.21a79282 ...
	I1019 13:15:47.991751  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key.21a79282: {Name:mkba25b2a83a02a27c18099aba13059d2c30977c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:47.991879  485611 certs.go:382] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.crt.21a79282 -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.crt
	I1019 13:15:47.991980  485611 certs.go:386] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key.21a79282 -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key
	I1019 13:15:47.992087  485611 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.key
	I1019 13:15:47.992134  485611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.crt with IP's: []
	I1019 13:15:48.644637  485611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.crt ...
	I1019 13:15:48.644665  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.crt: {Name:mk7e5a30f28b9e40781d76302eb1284fbe3bc598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:48.644809  485611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.key ...
	I1019 13:15:48.644824  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.key: {Name:mk68ae87e8be948598ff73e2bc8a6efe6a002635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
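The three profile certs generated above (client, apiserver, aggregator proxy-client) are all signed by the local minikube CAs. A rough openssl sketch of the client-cert step, assuming minikube's usual subject (CN=minikube-user, O=system:masters); file names here are placeholders, not the profile paths:

	# generate a key and CSR, then sign it with the cluster CA
	openssl req -new -newkey rsa:2048 -nodes -keyout client.key \
	  -subj "/O=system:masters/CN=minikube-user" -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
	  -CAcreateserial -days 1095 -out client.crt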
	I1019 13:15:48.644997  485611 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem (1338 bytes)
	W1019 13:15:48.645039  485611 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518_empty.pem, impossibly tiny 0 bytes
	I1019 13:15:48.645053  485611 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 13:15:48.645077  485611 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 13:15:48.645104  485611 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 13:15:48.645130  485611 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 13:15:48.645179  485611 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:15:48.645761  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 13:15:48.669549  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 13:15:48.699884  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 13:15:48.724134  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 13:15:48.759051  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 13:15:48.780782  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 13:15:48.798754  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 13:15:48.817716  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 13:15:48.835742  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 13:15:48.854172  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem --> /usr/share/ca-certificates/294518.pem (1338 bytes)
	I1019 13:15:48.882119  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /usr/share/ca-certificates/2945182.pem (1708 bytes)
	I1019 13:15:48.901201  485611 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 13:15:48.914760  485611 ssh_runner.go:195] Run: openssl version
	I1019 13:15:48.922206  485611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 13:15:48.931171  485611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:15:48.935927  485611 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:15:48.936025  485611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:15:48.978635  485611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 13:15:48.987536  485611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294518.pem && ln -fs /usr/share/ca-certificates/294518.pem /etc/ssl/certs/294518.pem"
	I1019 13:15:48.995940  485611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294518.pem
	I1019 13:15:49.000438  485611 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:20 /usr/share/ca-certificates/294518.pem
	I1019 13:15:49.000537  485611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294518.pem
	I1019 13:15:49.046117  485611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294518.pem /etc/ssl/certs/51391683.0"
	I1019 13:15:49.057791  485611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2945182.pem && ln -fs /usr/share/ca-certificates/2945182.pem /etc/ssl/certs/2945182.pem"
	I1019 13:15:49.071981  485611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2945182.pem
	I1019 13:15:49.076233  485611 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:20 /usr/share/ca-certificates/2945182.pem
	I1019 13:15:49.076341  485611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2945182.pem
	I1019 13:15:49.124014  485611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2945182.pem /etc/ssl/certs/3ec20f2e.0"
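The hash-named symlinks above (b5213941.0, 51391683.0, 3ec20f2e.0) are how OpenSSL locates trusted CAs: the link name is the subject-name hash of the certificate plus a collision counter. To check the mapping by hand:

	# prints the hash used for the /etc/ssl/certs/<hash>.0 symlink, e.g. b5213941
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem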
	I1019 13:15:49.133163  485611 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 13:15:49.137851  485611 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 13:15:49.137962  485611 kubeadm.go:400] StartCluster: {Name:embed-certs-834340 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:15:49.138056  485611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 13:15:49.138139  485611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 13:15:49.172269  485611 cri.go:89] found id: ""
	I1019 13:15:49.172389  485611 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 13:15:49.191898  485611 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 13:15:49.199354  485611 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 13:15:49.199446  485611 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 13:15:49.215213  485611 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 13:15:49.215233  485611 kubeadm.go:157] found existing configuration files:
	
	I1019 13:15:49.215316  485611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 13:15:49.235195  485611 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 13:15:49.235296  485611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 13:15:49.250094  485611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 13:15:49.269745  485611 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 13:15:49.269840  485611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 13:15:49.279256  485611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 13:15:49.287102  485611 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 13:15:49.287204  485611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 13:15:49.294438  485611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 13:15:49.304934  485611 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 13:15:49.304995  485611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 13:15:49.313098  485611 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 13:15:49.361726  485611 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 13:15:49.362100  485611 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 13:15:49.402083  485611 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 13:15:49.402156  485611 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 13:15:49.402193  485611 kubeadm.go:318] OS: Linux
	I1019 13:15:49.402241  485611 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 13:15:49.402292  485611 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1019 13:15:49.402342  485611 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 13:15:49.402392  485611 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 13:15:49.402443  485611 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 13:15:49.402499  485611 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 13:15:49.402547  485611 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 13:15:49.402597  485611 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 13:15:49.402645  485611 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1019 13:15:49.493062  485611 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 13:15:49.493230  485611 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 13:15:49.493351  485611 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 13:15:49.501160  485611 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1019 13:15:46.368681  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:15:48.369001  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:15:49.506747  485611 out.go:252]   - Generating certificates and keys ...
	I1019 13:15:49.506907  485611 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 13:15:49.507018  485611 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1019 13:15:50.152484  485611 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 13:15:50.698709  485611 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	W1019 13:15:50.869429  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:15:52.870337  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:15:54.871338  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:15:51.147081  485611 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 13:15:51.298849  485611 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 13:15:52.311108  485611 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 13:15:52.311467  485611 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-834340 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 13:15:52.673200  485611 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 13:15:52.673525  485611 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-834340 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 13:15:52.743627  485611 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 13:15:52.970401  485611 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 13:15:53.628887  485611 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 13:15:53.629159  485611 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 13:15:54.808810  485611 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 13:15:55.246139  485611 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 13:15:55.646238  485611 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 13:15:55.929305  485611 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 13:15:56.070194  485611 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 13:15:56.070730  485611 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 13:15:56.075468  485611 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1019 13:15:57.369185  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:15:59.868742  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:15:56.078897  485611 out.go:252]   - Booting up control plane ...
	I1019 13:15:56.079004  485611 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 13:15:56.079086  485611 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 13:15:56.079828  485611 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 13:15:56.105296  485611 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 13:15:56.105420  485611 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 13:15:56.113477  485611 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 13:15:56.113866  485611 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 13:15:56.113916  485611 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 13:15:56.244599  485611 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 13:15:56.244725  485611 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 13:15:58.246317  485611 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.00181996s
	I1019 13:15:58.249813  485611 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 13:15:58.249913  485611 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1019 13:15:58.250286  485611 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 13:15:58.250383  485611 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1019 13:16:01.869124  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:16:03.869407  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:16:02.728134  485611 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.477808215s
	I1019 13:16:04.217697  485611 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.967885333s
	I1019 13:16:05.752625  485611 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.502456819s
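	The three control-plane-check probes above poll local health endpoints (kube-apiserver /livez on 192.168.85.2:8443, kube-controller-manager /healthz on 127.0.0.1:10257, kube-scheduler /livez on 127.0.0.1:10259) until each answers 200 OK within the logged 4m0s budget. A minimal Go sketch of one such probe follows; it is not kubeadm's actual code, and the 500ms retry interval and the InsecureSkipVerify transport (these endpoints serve self-signed certificates) are illustrative assumptions:

	// healthprobe.go - poll a local control-plane health endpoint until it
	// answers 200 OK or the timeout elapses (a sketch, not kubeadm's code).
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Local control-plane components serve self-signed certs, so this
			// sketch skips verification (assumption for illustration only).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // assumed retry cadence
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		// Mirrors the scheduler check above ("This can take up to 4m0s").
		if err := waitHealthy("https://127.0.0.1:10259/livez", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}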
	I1019 13:16:05.779253  485611 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 13:16:05.795422  485611 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 13:16:05.819432  485611 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 13:16:05.819794  485611 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-834340 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 13:16:05.833484  485611 kubeadm.go:318] [bootstrap-token] Using token: upnd0k.hdge3z3mcruoqygz
	I1019 13:16:05.836375  485611 out.go:252]   - Configuring RBAC rules ...
	I1019 13:16:05.836508  485611 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 13:16:05.840975  485611 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 13:16:05.850941  485611 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 13:16:05.855240  485611 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 13:16:05.861656  485611 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 13:16:05.872684  485611 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 13:16:06.163281  485611 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 13:16:06.614161  485611 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 13:16:07.160392  485611 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 13:16:07.161843  485611 kubeadm.go:318] 
	I1019 13:16:07.161923  485611 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 13:16:07.161929  485611 kubeadm.go:318] 
	I1019 13:16:07.162010  485611 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 13:16:07.162015  485611 kubeadm.go:318] 
	I1019 13:16:07.162041  485611 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 13:16:07.162109  485611 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 13:16:07.162162  485611 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 13:16:07.162166  485611 kubeadm.go:318] 
	I1019 13:16:07.162222  485611 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 13:16:07.162227  485611 kubeadm.go:318] 
	I1019 13:16:07.162277  485611 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 13:16:07.162282  485611 kubeadm.go:318] 
	I1019 13:16:07.162336  485611 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 13:16:07.162415  485611 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 13:16:07.162486  485611 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 13:16:07.162493  485611 kubeadm.go:318] 
	I1019 13:16:07.162581  485611 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 13:16:07.162662  485611 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 13:16:07.162667  485611 kubeadm.go:318] 
	I1019 13:16:07.162755  485611 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token upnd0k.hdge3z3mcruoqygz \
	I1019 13:16:07.162863  485611 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0ee0bbb0fbfe8419c71683408bd38502dbf921f3cb179cb0365daeb92f967309 \
	I1019 13:16:07.162885  485611 kubeadm.go:318] 	--control-plane 
	I1019 13:16:07.162889  485611 kubeadm.go:318] 
	I1019 13:16:07.162978  485611 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 13:16:07.162982  485611 kubeadm.go:318] 
	I1019 13:16:07.163067  485611 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token upnd0k.hdge3z3mcruoqygz \
	I1019 13:16:07.163200  485611 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0ee0bbb0fbfe8419c71683408bd38502dbf921f3cb179cb0365daeb92f967309 
	I1019 13:16:07.166668  485611 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 13:16:07.166910  485611 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 13:16:07.167026  485611 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
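	The --discovery-token-ca-cert-hash value printed with the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, hex-encoded behind a "sha256:" prefix. A sketch of that computation; the /etc/kubernetes/pki/ca.crt path is the usual kubeadm location and is assumed here:

	// cahash.go - derive the kubeadm discovery-token-ca-cert-hash from a CA
	// certificate: sha256 over the DER-encoded SubjectPublicKeyInfo.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumed path; this is where kubeadm writes the cluster CA.
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Marshal only the public key (SPKI), not the whole certificate.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(spki)
		fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
	}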
	I1019 13:16:07.167046  485611 cni.go:84] Creating CNI manager for ""
	I1019 13:16:07.167057  485611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:16:07.172184  485611 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1019 13:16:06.370227  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:16:08.868219  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:16:07.175123  485611 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 13:16:07.181463  485611 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 13:16:07.181486  485611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 13:16:07.205501  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 13:16:07.648571  485611 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 13:16:07.648639  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:07.648715  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-834340 minikube.k8s.io/updated_at=2025_10_19T13_16_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=embed-certs-834340 minikube.k8s.io/primary=true
	I1019 13:16:07.838122  485611 ops.go:34] apiserver oom_adj: -16
	I1019 13:16:07.838231  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:08.338839  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:08.838862  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:09.339202  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:09.838879  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:10.339157  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:10.838612  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:11.339187  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:11.838463  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:12.339246  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:12.493646  485611 kubeadm.go:1113] duration metric: took 4.845059862s to wait for elevateKubeSystemPrivileges
	I1019 13:16:12.493724  485611 kubeadm.go:402] duration metric: took 23.35578878s to StartCluster
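	The repeated "get sa default" runs above are a simple existence poll: kubectl is re-run roughly every 500ms until the default ServiceAccount exists, which gates creating the minikube-rbac ClusterRoleBinding. A stdlib Go sketch of the same poll; the kubectl invocation and timeout here are illustrative assumptions, not minikube's exact command line:

	// waitsa.go - poll "kubectl get sa default" until it succeeds or a
	// deadline passes (a sketch of the loop seen in the log above).
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed budget
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
				"get", "sa", "default")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing above
		}
		fmt.Println("timed out waiting for default service account")
	}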
	I1019 13:16:12.493744  485611 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:16:12.493807  485611 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:16:12.495250  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:16:12.495480  485611 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:16:12.495574  485611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 13:16:12.495873  485611 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 13:16:12.495962  485611 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-834340"
	I1019 13:16:12.495977  485611 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-834340"
	I1019 13:16:12.496003  485611 host.go:66] Checking if "embed-certs-834340" exists ...
	I1019 13:16:12.496545  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:16:12.497048  485611 config.go:182] Loaded profile config "embed-certs-834340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:16:12.497104  485611 addons.go:69] Setting default-storageclass=true in profile "embed-certs-834340"
	I1019 13:16:12.497134  485611 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-834340"
	I1019 13:16:12.497419  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:16:12.500210  485611 out.go:179] * Verifying Kubernetes components...
	I1019 13:16:12.509970  485611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:16:12.538263  485611 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 13:16:12.544161  485611 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:16:12.544191  485611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 13:16:12.544271  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:16:12.544500  485611 addons.go:238] Setting addon default-storageclass=true in "embed-certs-834340"
	I1019 13:16:12.544534  485611 host.go:66] Checking if "embed-certs-834340" exists ...
	I1019 13:16:12.544958  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:16:12.575164  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:16:12.590089  485611 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 13:16:12.590139  485611 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 13:16:12.590203  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:16:12.628224  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:16:12.854755  485611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:16:12.943080  485611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 13:16:12.943191  485611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:16:12.989992  485611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 13:16:13.648655  485611 node_ready.go:35] waiting up to 6m0s for node "embed-certs-834340" to be "Ready" ...
	I1019 13:16:13.649000  485611 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
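	The replace pipeline above rewrites the coredns ConfigMap's Corefile to add a hosts stanza mapping host.minikube.internal to the gateway IP. The same edit can be sketched with client-go instead of sed; kubeconfig discovery and the insertion point before the forward directive are simplifying assumptions:

	// corednshost.go - inject a host record into CoreDNS's Corefile via the
	// API instead of the sed pipeline above (a sketch, not minikube's code).
	package main

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Same stanza the log injects, placed before the forward directive.
		stanza := "        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }\n"
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
			"        forward .", stanza+"        forward .", 1)
		if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}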
	I1019 13:16:13.695150  485611 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1019 13:16:10.868706  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:16:13.368241  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:16:14.867796  482757 pod_ready.go:94] pod "coredns-66bc5c9577-qp7k5" is "Ready"
	I1019 13:16:14.867825  482757 pod_ready.go:86] duration metric: took 40.005661779s for pod "coredns-66bc5c9577-qp7k5" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:14.870555  482757 pod_ready.go:83] waiting for pod "etcd-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:14.874977  482757 pod_ready.go:94] pod "etcd-no-preload-108149" is "Ready"
	I1019 13:16:14.875006  482757 pod_ready.go:86] duration metric: took 4.423017ms for pod "etcd-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:14.877179  482757 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:14.881744  482757 pod_ready.go:94] pod "kube-apiserver-no-preload-108149" is "Ready"
	I1019 13:16:14.881772  482757 pod_ready.go:86] duration metric: took 4.565855ms for pod "kube-apiserver-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:14.884092  482757 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:15.067143  482757 pod_ready.go:94] pod "kube-controller-manager-no-preload-108149" is "Ready"
	I1019 13:16:15.067175  482757 pod_ready.go:86] duration metric: took 183.057745ms for pod "kube-controller-manager-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:15.266313  482757 pod_ready.go:83] waiting for pod "kube-proxy-qfr27" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:15.666804  482757 pod_ready.go:94] pod "kube-proxy-qfr27" is "Ready"
	I1019 13:16:15.666832  482757 pod_ready.go:86] duration metric: took 400.49093ms for pod "kube-proxy-qfr27" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:13.698020  485611 addons.go:514] duration metric: took 1.202130688s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 13:16:14.154783  485611 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-834340" context rescaled to 1 replicas
	W1019 13:16:15.652505  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	I1019 13:16:15.866798  482757 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:16.266145  482757 pod_ready.go:94] pod "kube-scheduler-no-preload-108149" is "Ready"
	I1019 13:16:16.266173  482757 pod_ready.go:86] duration metric: took 399.34273ms for pod "kube-scheduler-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:16.266187  482757 pod_ready.go:40] duration metric: took 41.407951691s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:16:16.337918  482757 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 13:16:16.341422  482757 out.go:179] * Done! kubectl is now configured to use "no-preload-108149" cluster and "default" namespace by default
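	Each pod_ready wait above reduces to reading the pod's PodReady condition from its status. A client-go sketch of that check for the coredns pod named in the log; kubeconfig discovery via the default home location is an assumption:

	// podready.go - report whether a pod's PodReady condition is True
	// (a sketch of the check behind pod_ready.go, not minikube's code).
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-66bc5c9577-qp7k5", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
	}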
	W1019 13:16:18.151452  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	W1019 13:16:20.151637  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	W1019 13:16:22.154454  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	W1019 13:16:24.651560  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	W1019 13:16:27.151602  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	W1019 13:16:29.152908  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
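	The node_ready retries are the node-level analogue: the loop re-fetches the node until its NodeReady condition turns True. A sketch under the same kubeconfig assumption, with a retry cadence similar to the log's:

	// nodeready.go - poll a node until its NodeReady condition is True
	// (a sketch of the wait behind node_ready.go, not minikube's code).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isNodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "embed-certs-834340", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if isNodeReady(node) {
				fmt.Println("node is Ready")
				return
			}
			fmt.Println("node not Ready yet (will retry)")
			time.Sleep(2 * time.Second) // the log retries on a similar cadence
		}
	}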
	
	
	==> CRI-O <==
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.620423712Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e2a5478e-05ce-49ed-87b0-cf4abfb22bb1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.621388082Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3cb44831-3c2c-4dd8-9c1e-fabbd699ba77 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.622469622Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w/dashboard-metrics-scraper" id=7ced63a7-0eed-401e-8234-f25a42e0f19a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.622755099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.631011998Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.631705349Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.647211913Z" level=info msg="Created container e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w/dashboard-metrics-scraper" id=7ced63a7-0eed-401e-8234-f25a42e0f19a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.648325593Z" level=info msg="Starting container: e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b" id=03407226-2dbd-421a-ac21-d822d87f01a0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.65019592Z" level=info msg="Started container" PID=1640 containerID=e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w/dashboard-metrics-scraper id=03407226-2dbd-421a-ac21-d822d87f01a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b926023c2cf0cfe55827f1b70f842c51643b4b2ba9d8f31e9aee0dd12b634a4e
	Oct 19 13:16:10 no-preload-108149 conmon[1638]: conmon e89abbf84a6ae8fad713 <ninfo>: container 1640 exited with status 1
	Oct 19 13:16:11 no-preload-108149 crio[651]: time="2025-10-19T13:16:11.001301254Z" level=info msg="Removing container: 5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9" id=1fd7fd20-0bef-4fba-8dcc-5a721826e768 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:16:11 no-preload-108149 crio[651]: time="2025-10-19T13:16:11.010631161Z" level=info msg="Error loading conmon cgroup of container 5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9: cgroup deleted" id=1fd7fd20-0bef-4fba-8dcc-5a721826e768 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:16:11 no-preload-108149 crio[651]: time="2025-10-19T13:16:11.014539669Z" level=info msg="Removed container 5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w/dashboard-metrics-scraper" id=1fd7fd20-0bef-4fba-8dcc-5a721826e768 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.908884343Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.914605807Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.914643018Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.91466815Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.919587207Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.919778947Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.919863822Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.934299521Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.934337782Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.934362799Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.939145009Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.939182647Z" level=info msg="Updated default CNI network name to kindnet"
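	The CNI monitoring events above come from CRI-O watching /etc/cni/net.d for config changes: kindnet writes 10-kindnet.conflist.temp, then renames it into place so the final file appears atomically. A rough equivalent of such directory monitoring using github.com/fsnotify/fsnotify; the library choice is an assumption for illustration, as CRI-O ships its own watcher:

	// cniwatch.go - watch a CNI config directory and report file events,
	// loosely mirroring the "CNI monitoring event" lines above.
	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-w.Events:
				// CREATE/WRITE/RENAME events correspond to the log lines above.
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			case err := <-w.Errors:
				log.Printf("watch error: %v", err)
			}
		}
	}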
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e89abbf84a6ae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   b926023c2cf0c       dashboard-metrics-scraper-6ffb444bf9-lrg9w   kubernetes-dashboard
	6643d7449e536       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           27 seconds ago       Running             storage-provisioner         2                   70549ea478aaa       storage-provisioner                          kube-system
	75b1666aca773       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   dc1cfda0db32d       kubernetes-dashboard-855c9754f9-8wvh6        kubernetes-dashboard
	35a081210b7fa       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   46118d1f936a3       kindnet-s5wgc                                kube-system
	3602ce3b8d0b4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   0b93268a36818       coredns-66bc5c9577-qp7k5                     kube-system
	e15c1f7380a9f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   47e281aa26f2f       busybox                                      default
	d7af5087f11ac       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   fa142006911e8       kube-proxy-qfr27                             kube-system
	f06654b2d2683       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           58 seconds ago       Exited              storage-provisioner         1                   70549ea478aaa       storage-provisioner                          kube-system
	0452bd1f37844       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   f58a9c335771b       kube-controller-manager-no-preload-108149    kube-system
	24a75ddccb641       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   a60cbf729c4f7       kube-scheduler-no-preload-108149             kube-system
	b649715b02d1c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   1dc9fe6ebc265       kube-apiserver-no-preload-108149             kube-system
	69cd340c87d96       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   972392ac13c9e       etcd-no-preload-108149                       kube-system
	
	
	==> coredns [3602ce3b8d0b42a07e319435c2d257a4f4c245eb0405e0ad593bf94803f45907] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58333 - 31132 "HINFO IN 2288926805526552410.6672143865960942580. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03362048s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
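	The "dial tcp 10.96.0.1:443: i/o timeout" errors mean CoreDNS could not yet reach the kubernetes Service VIP; they clear once kube-proxy and the CNI finish programming the node. A minimal reachability check for that symptom, using only the standard library (the VIP and timeout are taken from the log and an assumed 3s bound):

	// svcprobe.go - bounded TCP dial against the in-cluster kubernetes
	// Service VIP, reproducing the pass/fail the CoreDNS errors above show.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // matches the "i/o timeout" symptom
			return
		}
		conn.Close()
		fmt.Println("service VIP reachable")
	}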
	
	
	==> describe nodes <==
	Name:               no-preload-108149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-108149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=no-preload-108149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_14_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:14:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-108149
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:16:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:16:03 +0000   Sun, 19 Oct 2025 13:14:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:16:03 +0000   Sun, 19 Oct 2025 13:14:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:16:03 +0000   Sun, 19 Oct 2025 13:14:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:16:03 +0000   Sun, 19 Oct 2025 13:14:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-108149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                a4d8c0d2-63fb-4a48-994a-8850e6b21b64
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-qp7k5                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m
	  kube-system                 etcd-no-preload-108149                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m8s
	  kube-system                 kindnet-s5wgc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-no-preload-108149              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-no-preload-108149     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-qfr27                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-no-preload-108149              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lrg9w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8wvh6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
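	The Allocated resources figures are just the column sums from the pod table divided by the node's allocatable capacity: the six non-zero CPU requests total 850m against 2000m allocatable (2 cores), which truncates to 42%. A quick check of that arithmetic:

	// alloc.go - verify the "cpu 850m (42%)" line from the CPU requests
	// listed in the non-terminated pods table above.
	package main

	import "fmt"

	func main() {
		// coredns, etcd, kindnet, kube-apiserver, kube-controller-manager,
		// kube-scheduler request millicores, in table order.
		requests := []int{100, 100, 100, 250, 200, 100}
		total := 0
		for _, r := range requests {
			total += r
		}
		allocatable := 2000 // 2 CPUs in millicores
		fmt.Printf("cpu %dm (%d%%)\n", total, total*100/allocatable) // cpu 850m (42%)
	}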
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 118s                   kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Warning  CgroupV1                 2m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m17s (x8 over 2m17s)  kubelet          Node no-preload-108149 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m17s (x8 over 2m17s)  kubelet          Node no-preload-108149 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m17s (x8 over 2m17s)  kubelet          Node no-preload-108149 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m4s                   kubelet          Node no-preload-108149 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m4s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m4s                   kubelet          Node no-preload-108149 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m4s                   kubelet          Node no-preload-108149 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m1s                   node-controller  Node no-preload-108149 event: Registered Node no-preload-108149 in Controller
	  Normal   NodeReady                105s                   kubelet          Node no-preload-108149 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node no-preload-108149 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node no-preload-108149 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node no-preload-108149 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node no-preload-108149 event: Registered Node no-preload-108149 in Controller
	
	
	==> dmesg <==
	[Oct19 12:52] overlayfs: idmapped layers are currently not supported
	[Oct19 12:53] overlayfs: idmapped layers are currently not supported
	[Oct19 12:54] overlayfs: idmapped layers are currently not supported
	[Oct19 12:56] overlayfs: idmapped layers are currently not supported
	[ +16.315179] overlayfs: idmapped layers are currently not supported
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	[Oct19 13:15] overlayfs: idmapped layers are currently not supported
	[ +34.413925] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [69cd340c87d966c00eb54338c8930e6a5166ffc684c24d32e2f7db4bde1a9182] <==
	{"level":"warn","ts":"2025-10-19T13:15:29.932206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:29.955463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.016733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.178155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.190243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.401950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.423859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.466831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.489413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.518188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.547910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.583075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.609488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.651668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.733812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.751926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.763422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.784258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.801258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.830056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.856678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.900440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.926104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.958464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:31.090007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43694","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:16:32 up  2:59,  0 user,  load average: 3.94, 3.33, 2.78
	Linux no-preload-108149 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [35a081210b7fa08acbe3227adf5610734dfa60738cda733fc91359b203bcf29b] <==
	I1019 13:15:33.706258       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:15:33.706449       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 13:15:33.706571       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:15:33.706583       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:15:33.706594       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:15:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:15:33.907476       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:15:33.907534       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:15:33.907588       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:15:33.908728       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 13:16:03.908379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 13:16:03.908511       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 13:16:03.908636       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 13:16:03.908769       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 13:16:05.408262       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:16:05.408407       1 metrics.go:72] Registering metrics
	I1019 13:16:05.408511       1 controller.go:711] "Syncing nftables rules"
	I1019 13:16:13.908531       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:16:13.908623       1 main.go:301] handling current node
	I1019 13:16:23.909832       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:16:23.909902       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b649715b02d1cdf3f028d00c9f1eda59d4501cabfe3bf7e05ad588e094515f85] <==
	I1019 13:15:32.226285       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 13:15:32.243625       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:15:32.244923       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 13:15:32.244978       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 13:15:32.256973       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 13:15:32.257000       1 policy_source.go:240] refreshing policies
	E1019 13:15:32.276010       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 13:15:32.279894       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 13:15:32.280446       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:15:32.323698       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 13:15:32.323758       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 13:15:32.332505       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 13:15:32.349936       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 13:15:32.349956       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 13:15:32.499917       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:15:32.957641       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:15:34.231405       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 13:15:34.381388       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 13:15:34.432954       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:15:34.448140       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:15:34.699418       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.198.68"}
	I1019 13:15:34.735749       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.49.242"}
	I1019 13:15:35.755119       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 13:15:36.148897       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:15:36.278782       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0452bd1f37844e20d71713464f7c02412906aa5aeab0336266163b06aba35d56] <==
	I1019 13:15:35.705243       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:15:35.708039       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 13:15:35.708061       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 13:15:35.710179       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 13:15:35.705287       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 13:15:35.705299       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 13:15:35.705253       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 13:15:35.705269       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 13:15:35.705278       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 13:15:35.716908       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 13:15:35.717185       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 13:15:35.717263       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 13:15:35.717313       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 13:15:35.718060       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:15:35.721797       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 13:15:35.721893       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 13:15:35.722283       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 13:15:35.737135       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 13:15:35.743612       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 13:15:35.746398       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:15:35.766677       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 13:15:35.766797       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:15:35.770431       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:15:35.788338       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 13:15:35.788398       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [d7af5087f11ac0a282a7c09f5c3f2ad9affeab8823717f75f713a854c8124884] <==
	I1019 13:15:34.119283       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:15:34.275176       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:15:34.380250       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:15:34.380297       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 13:15:34.380362       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:15:34.546683       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:15:34.546812       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:15:34.552008       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:15:34.552383       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:15:34.552598       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:15:34.562556       1 config.go:200] "Starting service config controller"
	I1019 13:15:34.562591       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:15:34.562615       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:15:34.562620       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:15:34.562632       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:15:34.562636       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:15:34.567806       1 config.go:309] "Starting node config controller"
	I1019 13:15:34.567823       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:15:34.567830       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:15:34.663704       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 13:15:34.663749       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:15:34.792517       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [24a75ddccb641f284753e265035d0ec049f86894b9a8bb4c8eb68267f2a6bbd3] <==
	I1019 13:15:26.503303       1 serving.go:386] Generated self-signed cert in-memory
	W1019 13:15:32.160159       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 13:15:32.160191       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 13:15:32.160201       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 13:15:32.160211       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 13:15:32.270168       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 13:15:32.270291       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:15:32.279451       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:15:32.282545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 13:15:32.282573       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:15:32.331079       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:15:32.441810       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 13:15:33 no-preload-108149 kubelet[767]: W1019 13:15:33.045779     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/crio-46118d1f936a3073a1759f158849143bcff5bad3532c932be2b41f40f9bbe7a1 WatchSource:0}: Error finding container 46118d1f936a3073a1759f158849143bcff5bad3532c932be2b41f40f9bbe7a1: Status 404 returned error can't find the container with id 46118d1f936a3073a1759f158849143bcff5bad3532c932be2b41f40f9bbe7a1
	Oct 19 13:15:36 no-preload-108149 kubelet[767]: I1019 13:15:36.369971     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wklg4\" (UniqueName: \"kubernetes.io/projected/20bcb516-2b35-4c2e-af84-2110a56382b9-kube-api-access-wklg4\") pod \"dashboard-metrics-scraper-6ffb444bf9-lrg9w\" (UID: \"20bcb516-2b35-4c2e-af84-2110a56382b9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w"
	Oct 19 13:15:36 no-preload-108149 kubelet[767]: I1019 13:15:36.370036     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5mfm\" (UniqueName: \"kubernetes.io/projected/1e8b4000-201a-4e13-a3ec-4b0799d1f3cd-kube-api-access-c5mfm\") pod \"kubernetes-dashboard-855c9754f9-8wvh6\" (UID: \"1e8b4000-201a-4e13-a3ec-4b0799d1f3cd\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8wvh6"
	Oct 19 13:15:36 no-preload-108149 kubelet[767]: I1019 13:15:36.370074     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1e8b4000-201a-4e13-a3ec-4b0799d1f3cd-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8wvh6\" (UID: \"1e8b4000-201a-4e13-a3ec-4b0799d1f3cd\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8wvh6"
	Oct 19 13:15:36 no-preload-108149 kubelet[767]: I1019 13:15:36.370095     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/20bcb516-2b35-4c2e-af84-2110a56382b9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-lrg9w\" (UID: \"20bcb516-2b35-4c2e-af84-2110a56382b9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w"
	Oct 19 13:15:36 no-preload-108149 kubelet[767]: W1019 13:15:36.634457     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/crio-b926023c2cf0cfe55827f1b70f842c51643b4b2ba9d8f31e9aee0dd12b634a4e WatchSource:0}: Error finding container b926023c2cf0cfe55827f1b70f842c51643b4b2ba9d8f31e9aee0dd12b634a4e: Status 404 returned error can't find the container with id b926023c2cf0cfe55827f1b70f842c51643b4b2ba9d8f31e9aee0dd12b634a4e
	Oct 19 13:15:49 no-preload-108149 kubelet[767]: I1019 13:15:49.936482     767 scope.go:117] "RemoveContainer" containerID="1fe8e0af5771f032baab83ed8cf4f208ff0d3ba37df65f7ce007aae30ca71716"
	Oct 19 13:15:49 no-preload-108149 kubelet[767]: I1019 13:15:49.975504     767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8wvh6" podStartSLOduration=7.132295523 podStartE2EDuration="13.975358366s" podCreationTimestamp="2025-10-19 13:15:36 +0000 UTC" firstStartedPulling="2025-10-19 13:15:36.609273399 +0000 UTC m=+13.344460867" lastFinishedPulling="2025-10-19 13:15:43.452336241 +0000 UTC m=+20.187523710" observedRunningTime="2025-10-19 13:15:43.925344937 +0000 UTC m=+20.660532414" watchObservedRunningTime="2025-10-19 13:15:49.975358366 +0000 UTC m=+26.710545834"
	Oct 19 13:15:50 no-preload-108149 kubelet[767]: I1019 13:15:50.941661     767 scope.go:117] "RemoveContainer" containerID="5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9"
	Oct 19 13:15:50 no-preload-108149 kubelet[767]: E1019 13:15:50.942285     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrg9w_kubernetes-dashboard(20bcb516-2b35-4c2e-af84-2110a56382b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w" podUID="20bcb516-2b35-4c2e-af84-2110a56382b9"
	Oct 19 13:15:50 no-preload-108149 kubelet[767]: I1019 13:15:50.942900     767 scope.go:117] "RemoveContainer" containerID="1fe8e0af5771f032baab83ed8cf4f208ff0d3ba37df65f7ce007aae30ca71716"
	Oct 19 13:15:51 no-preload-108149 kubelet[767]: I1019 13:15:51.945457     767 scope.go:117] "RemoveContainer" containerID="5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9"
	Oct 19 13:15:51 no-preload-108149 kubelet[767]: E1019 13:15:51.945619     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrg9w_kubernetes-dashboard(20bcb516-2b35-4c2e-af84-2110a56382b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w" podUID="20bcb516-2b35-4c2e-af84-2110a56382b9"
	Oct 19 13:15:56 no-preload-108149 kubelet[767]: I1019 13:15:56.585916     767 scope.go:117] "RemoveContainer" containerID="5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9"
	Oct 19 13:15:56 no-preload-108149 kubelet[767]: E1019 13:15:56.586121     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrg9w_kubernetes-dashboard(20bcb516-2b35-4c2e-af84-2110a56382b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w" podUID="20bcb516-2b35-4c2e-af84-2110a56382b9"
	Oct 19 13:16:03 no-preload-108149 kubelet[767]: I1019 13:16:03.976701     767 scope.go:117] "RemoveContainer" containerID="f06654b2d2683ec240f70fa86e309b5a103311a29fb5afb2f214482a14902133"
	Oct 19 13:16:10 no-preload-108149 kubelet[767]: I1019 13:16:10.619712     767 scope.go:117] "RemoveContainer" containerID="5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9"
	Oct 19 13:16:10 no-preload-108149 kubelet[767]: I1019 13:16:10.998750     767 scope.go:117] "RemoveContainer" containerID="5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9"
	Oct 19 13:16:10 no-preload-108149 kubelet[767]: I1019 13:16:10.999049     767 scope.go:117] "RemoveContainer" containerID="e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b"
	Oct 19 13:16:10 no-preload-108149 kubelet[767]: E1019 13:16:10.999205     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrg9w_kubernetes-dashboard(20bcb516-2b35-4c2e-af84-2110a56382b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w" podUID="20bcb516-2b35-4c2e-af84-2110a56382b9"
	Oct 19 13:16:16 no-preload-108149 kubelet[767]: I1019 13:16:16.586274     767 scope.go:117] "RemoveContainer" containerID="e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b"
	Oct 19 13:16:16 no-preload-108149 kubelet[767]: E1019 13:16:16.586451     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrg9w_kubernetes-dashboard(20bcb516-2b35-4c2e-af84-2110a56382b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w" podUID="20bcb516-2b35-4c2e-af84-2110a56382b9"
	Oct 19 13:16:28 no-preload-108149 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 13:16:28 no-preload-108149 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 13:16:28 no-preload-108149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [75b1666aca773065101164715baec4b2ea6e97910e9b1b816056fe57b3894d8b] <==
	2025/10/19 13:15:43 Starting overwatch
	2025/10/19 13:15:43 Using namespace: kubernetes-dashboard
	2025/10/19 13:15:43 Using in-cluster config to connect to apiserver
	2025/10/19 13:15:43 Using secret token for csrf signing
	2025/10/19 13:15:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 13:15:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 13:15:43 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 13:15:43 Generating JWE encryption key
	2025/10/19 13:15:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 13:15:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 13:15:44 Initializing JWE encryption key from synchronized object
	2025/10/19 13:15:44 Creating in-cluster Sidecar client
	2025/10/19 13:15:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:15:44 Serving insecurely on HTTP port: 9090
	2025/10/19 13:16:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6643d7449e536bebc7c48cb509e939d206eca0e67efbecc9a49f6f230d6a8f2e] <==
	I1019 13:16:04.082717       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 13:16:04.082866       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 13:16:04.087976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:07.549796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:11.810617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:15.410941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:18.464681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:21.486478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:21.491433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:16:21.491587       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 13:16:21.491770       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-108149_d7617f91-b828-455b-aa0b-eeb97a558d7e!
	I1019 13:16:21.492604       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a5ba3f0-b17e-4468-873b-e2df26dbba12", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-108149_d7617f91-b828-455b-aa0b-eeb97a558d7e became leader
	W1019 13:16:21.496761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:21.501915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:16:21.592168       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-108149_d7617f91-b828-455b-aa0b-eeb97a558d7e!
	W1019 13:16:23.504769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:23.509591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:25.512779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:25.519465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:27.522496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:27.526898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:29.530352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:29.534831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:31.538641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:31.549443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f06654b2d2683ec240f70fa86e309b5a103311a29fb5afb2f214482a14902133] <==
	I1019 13:15:33.882215       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 13:16:03.884702       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
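Two diagnostics stand out in the dump above: kube-proxy warns that nodePortAddresses is unset (so NodePort connections are accepted on all local IPs), and the first storage-provisioner instance exited after an i/o timeout dialing the in-cluster apiserver service at 10.96.0.1:443. A minimal reachability probe from inside the pod network, assuming the curlimages/curl image and a 5s timeout (both arbitrary choices, not part of the harness), would be:

	kubectl --context no-preload-108149 run api-probe --rm -it --restart=Never --image=curlimages/curl -- curl -sk -m 5 https://10.96.0.1:443/version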
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-108149 -n no-preload-108149
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-108149 -n no-preload-108149: exit status 2 (369.644638ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
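A nonzero exit code from "minikube status" reflects cluster component state rather than a failure of the command itself, which is why the harness records "may be ok" here even though stdout prints "Running" for the queried field. One way to see all component fields at once instead of a single Go-template selection (a sketch, not something the harness runs) is JSON output:

	out/minikube-linux-arm64 status -p no-preload-108149 --output json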
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-108149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
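The --field-selector above filters server-side for pods whose status.phase is anything other than Running; an empty result means no pod was stuck at capture time. A human-readable variant of the same query (the custom-columns spec is illustrative) could be:

	kubectl --context no-preload-108149 get po -A --field-selector=status.phase!=Running -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase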
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-108149
helpers_test.go:243: (dbg) docker inspect no-preload-108149:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0",
	        "Created": "2025-10-19T13:13:42.966864471Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482890,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:15:15.996140266Z",
	            "FinishedAt": "2025-10-19T13:15:15.126243591Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/hostname",
	        "HostsPath": "/var/lib/docker/containers/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/hosts",
	        "LogPath": "/var/lib/docker/containers/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0-json.log",
	        "Name": "/no-preload-108149",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-108149:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-108149",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0",
	                "LowerDir": "/var/lib/docker/overlay2/ca33adf3602bb1f3e90dd2bca8f00da7d19763fa3c96fba2f19c6b9ace8c8b7b-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca33adf3602bb1f3e90dd2bca8f00da7d19763fa3c96fba2f19c6b9ace8c8b7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca33adf3602bb1f3e90dd2bca8f00da7d19763fa3c96fba2f19c6b9ace8c8b7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca33adf3602bb1f3e90dd2bca8f00da7d19763fa3c96fba2f19c6b9ace8c8b7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-108149",
	                "Source": "/var/lib/docker/volumes/no-preload-108149/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-108149",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-108149",
	                "name.minikube.sigs.k8s.io": "no-preload-108149",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f9e1e1a5e674528b28986c495abb864248ebbfb26d7dd8d3c64b6959fa218ce3",
	            "SandboxKey": "/var/run/docker/netns/f9e1e1a5e674",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-108149": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:ee:0a:2b:21:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02fa40d5a7624754fb29434a70126850295cfdc9e5c6d2dc3c5e97dc6c14e8ed",
	                    "EndpointID": "d091971df39baec63a66d8c438a14ddce8f775d545a415c7dcb4bc72a88cdb7e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-108149",
	                        "4857474c82b9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
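Most of the inspect dump above is static container configuration; the fields relevant to this failure are the network placement (192.168.76.2 on the no-preload-108149 network) and the host port mappings (8443 published on 127.0.0.1:33436). Go templates can extract those directly rather than scanning the full JSON, for example (a sketch; any path shown in the dump can be queried the same way):

	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-108149
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-108149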
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-108149 -n no-preload-108149
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-108149 -n no-preload-108149: exit status 2 (365.861728ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-108149 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-108149 logs -n 25: (1.304780499s)
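The capture below is capped at the last 25 lines per component (-n is the shorthand for minikube logs --length), so each section starts mid-stream. When a fuller capture is needed, the same command can write everything to a file instead (an option of minikube logs, not something the harness does here):

	out/minikube-linux-arm64 -p no-preload-108149 logs --file /tmp/no-preload-108149.log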
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ force-systemd-flag-606072 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ delete  │ -p force-systemd-flag-606072                                                                                                                                                                                                                  │ force-systemd-flag-606072 │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:11 UTC │
	│ start   │ -p cert-options-264135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:11 UTC │ 19 Oct 25 13:12 UTC │
	│ ssh     │ cert-options-264135 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ ssh     │ -p cert-options-264135 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ delete  │ -p cert-options-264135                                                                                                                                                                                                                        │ cert-options-264135       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p cert-expiration-088393 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ delete  │ -p cert-expiration-088393                                                                                                                                                                                                                     │ cert-expiration-088393    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-842494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │                     │
	│ stop    │ -p old-k8s-version-842494 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-842494 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │ 19 Oct 25 13:14 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-108149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ stop    │ -p no-preload-108149 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable dashboard -p no-preload-108149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:16 UTC │
	│ image   │ old-k8s-version-842494 image list --format=json                                                                                                                                                                                               │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ pause   │ -p old-k8s-version-842494 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                                                                                     │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                                                                                     │ old-k8s-version-842494    │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834340        │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ image   │ no-preload-108149 image list --format=json                                                                                                                                                                                                    │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ pause   │ -p no-preload-108149 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-108149         │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:15:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:15:30.752153  485611 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:15:30.752274  485611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:15:30.752279  485611 out.go:374] Setting ErrFile to fd 2...
	I1019 13:15:30.752284  485611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:15:30.752547  485611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:15:30.752947  485611 out.go:368] Setting JSON to false
	I1019 13:15:30.754243  485611 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10681,"bootTime":1760869050,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:15:30.754316  485611 start.go:141] virtualization:  
	I1019 13:15:30.758374  485611 out.go:179] * [embed-certs-834340] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:15:30.761867  485611 notify.go:220] Checking for updates...
	I1019 13:15:30.761834  485611 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:15:30.765621  485611 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:15:30.768796  485611 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:15:30.771931  485611 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:15:30.774941  485611 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:15:30.778128  485611 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:15:30.781610  485611 config.go:182] Loaded profile config "no-preload-108149": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:15:30.781719  485611 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:15:30.827365  485611 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:15:30.827494  485611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:15:30.929881  485611 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 13:15:30.92028979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:15:30.929986  485611 docker.go:318] overlay module found
	I1019 13:15:30.933282  485611 out.go:179] * Using the docker driver based on user configuration
	I1019 13:15:30.936285  485611 start.go:305] selected driver: docker
	I1019 13:15:30.936310  485611 start.go:925] validating driver "docker" against <nil>
	I1019 13:15:30.936325  485611 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:15:30.937188  485611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:15:31.046805  485611 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 13:15:31.035241044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:15:31.046997  485611 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 13:15:31.047228  485611 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:15:31.050265  485611 out.go:179] * Using Docker driver with root privileges
	I1019 13:15:31.053099  485611 cni.go:84] Creating CNI manager for ""
	I1019 13:15:31.053173  485611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:15:31.053188  485611 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 13:15:31.053277  485611 start.go:349] cluster config:
	{Name:embed-certs-834340 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:15:31.058344  485611 out.go:179] * Starting "embed-certs-834340" primary control-plane node in "embed-certs-834340" cluster
	I1019 13:15:31.061212  485611 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:15:31.064148  485611 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:15:31.066955  485611 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:15:31.067016  485611 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 13:15:31.067030  485611 cache.go:58] Caching tarball of preloaded images
	I1019 13:15:31.067139  485611 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:15:31.067155  485611 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 13:15:31.067264  485611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/config.json ...
	I1019 13:15:31.067288  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/config.json: {Name:mkf044743046292d05dcaa840723539dd448573b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:31.067464  485611 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:15:31.096808  485611 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:15:31.096835  485611 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:15:31.096848  485611 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:15:31.096871  485611 start.go:360] acquireMachinesLock for embed-certs-834340: {Name:mka158a8ff4f9c1986944dd404295df0d84afabc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:15:31.096982  485611 start.go:364] duration metric: took 89.831µs to acquireMachinesLock for "embed-certs-834340"
	I1019 13:15:31.097013  485611 start.go:93] Provisioning new machine with config: &{Name:embed-certs-834340 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:15:31.097093  485611 start.go:125] createHost starting for "" (driver="docker")
	I1019 13:15:32.417463  482757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.163997529s)
	I1019 13:15:34.763049  482757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.487791173s)
	I1019 13:15:34.763096  482757 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.472351196s)
	I1019 13:15:34.763133  482757 node_ready.go:35] waiting up to 6m0s for node "no-preload-108149" to be "Ready" ...
	I1019 13:15:34.763433  482757 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.932582008s)
	I1019 13:15:34.766570  482757 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-108149 addons enable metrics-server
	
	I1019 13:15:34.769594  482757 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1019 13:15:34.772539  482757 addons.go:514] duration metric: took 9.996248096s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1019 13:15:34.774931  482757 node_ready.go:49] node "no-preload-108149" is "Ready"
	I1019 13:15:34.774999  482757 node_ready.go:38] duration metric: took 11.853033ms for node "no-preload-108149" to be "Ready" ...
	I1019 13:15:34.775039  482757 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:15:34.775133  482757 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:15:34.795761  482757 api_server.go:72] duration metric: took 10.018569627s to wait for apiserver process to appear ...
	I1019 13:15:34.795785  482757 api_server.go:88] waiting for apiserver healthz status ...
	I1019 13:15:34.795805  482757 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 13:15:34.806028  482757 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 13:15:34.807285  482757 api_server.go:141] control plane version: v1.34.1
	I1019 13:15:34.807348  482757 api_server.go:131] duration metric: took 11.554682ms to wait for apiserver health ...
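The healthz check above is a plain HTTPS GET against the apiserver (https://192.168.76.2:8443/healthz) with certificate verification relaxed, repeated until it answers 200 "ok". A minimal Go sketch of that probe; the address comes from the log, while the 500ms retry cadence is an assumption.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it
// answers 200 OK or the deadline expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver cert is self-signed by the minikube CA, so
		// skip verification for this liveness probe only.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed retry cadence
	}
	return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}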
	I1019 13:15:34.807372  482757 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 13:15:34.813657  482757 system_pods.go:59] 8 kube-system pods found
	I1019 13:15:34.813749  482757 system_pods.go:61] "coredns-66bc5c9577-qp7k5" [0f0731c8-758f-4a89-9d62-19ff52f8d9ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:15:34.813776  482757 system_pods.go:61] "etcd-no-preload-108149" [288fa476-5552-477a-8958-75fb017c1f15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:15:34.813814  482757 system_pods.go:61] "kindnet-s5wgc" [eecfcd8e-961b-4469-8bab-a15f4053fcae] Running
	I1019 13:15:34.813842  482757 system_pods.go:61] "kube-apiserver-no-preload-108149" [7fc22236-bfa6-43f2-888e-899c1802dccf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:15:34.813865  482757 system_pods.go:61] "kube-controller-manager-no-preload-108149" [589ab894-5b6a-4901-ae64-033a1841821c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:15:34.813903  482757 system_pods.go:61] "kube-proxy-qfr27" [12f5f5aa-7552-44bc-9a49-879a274e9a57] Running
	I1019 13:15:34.813931  482757 system_pods.go:61] "kube-scheduler-no-preload-108149" [fd497e0f-9bce-4bda-850f-ddc249fc05c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:15:34.813953  482757 system_pods.go:61] "storage-provisioner" [7de7f3d6-6098-48a3-966a-f0a82622bdeb] Running
	I1019 13:15:34.813991  482757 system_pods.go:74] duration metric: took 6.598201ms to wait for pod list to return data ...
	I1019 13:15:34.814018  482757 default_sa.go:34] waiting for default service account to be created ...
	I1019 13:15:34.818593  482757 default_sa.go:45] found service account: "default"
	I1019 13:15:34.818667  482757 default_sa.go:55] duration metric: took 4.626212ms for default service account to be created ...
	I1019 13:15:34.818691  482757 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 13:15:34.822386  482757 system_pods.go:86] 8 kube-system pods found
	I1019 13:15:34.822465  482757 system_pods.go:89] "coredns-66bc5c9577-qp7k5" [0f0731c8-758f-4a89-9d62-19ff52f8d9ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:15:34.822491  482757 system_pods.go:89] "etcd-no-preload-108149" [288fa476-5552-477a-8958-75fb017c1f15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:15:34.822514  482757 system_pods.go:89] "kindnet-s5wgc" [eecfcd8e-961b-4469-8bab-a15f4053fcae] Running
	I1019 13:15:34.822555  482757 system_pods.go:89] "kube-apiserver-no-preload-108149" [7fc22236-bfa6-43f2-888e-899c1802dccf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:15:34.822576  482757 system_pods.go:89] "kube-controller-manager-no-preload-108149" [589ab894-5b6a-4901-ae64-033a1841821c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:15:34.822596  482757 system_pods.go:89] "kube-proxy-qfr27" [12f5f5aa-7552-44bc-9a49-879a274e9a57] Running
	I1019 13:15:34.822635  482757 system_pods.go:89] "kube-scheduler-no-preload-108149" [fd497e0f-9bce-4bda-850f-ddc249fc05c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:15:34.822653  482757 system_pods.go:89] "storage-provisioner" [7de7f3d6-6098-48a3-966a-f0a82622bdeb] Running
	I1019 13:15:34.822689  482757 system_pods.go:126] duration metric: took 3.964607ms to wait for k8s-apps to be running ...
	I1019 13:15:34.822713  482757 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 13:15:34.822797  482757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:15:34.839526  482757 system_svc.go:56] duration metric: took 16.803975ms WaitForService to wait for kubelet
	I1019 13:15:34.839604  482757 kubeadm.go:586] duration metric: took 10.062417338s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:15:34.839650  482757 node_conditions.go:102] verifying NodePressure condition ...
	I1019 13:15:34.847859  482757 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 13:15:34.847949  482757 node_conditions.go:123] node cpu capacity is 2
	I1019 13:15:34.847977  482757 node_conditions.go:105] duration metric: took 8.293929ms to run NodePressure ...
	I1019 13:15:34.848003  482757 start.go:241] waiting for startup goroutines ...
	I1019 13:15:34.848034  482757 start.go:246] waiting for cluster config update ...
	I1019 13:15:34.848073  482757 start.go:255] writing updated cluster config ...
	I1019 13:15:34.848418  482757 ssh_runner.go:195] Run: rm -f paused
	I1019 13:15:34.858155  482757 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:15:34.862088  482757 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qp7k5" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:15:31.100592  485611 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 13:15:31.100830  485611 start.go:159] libmachine.API.Create for "embed-certs-834340" (driver="docker")
	I1019 13:15:31.100877  485611 client.go:168] LocalClient.Create starting
	I1019 13:15:31.100943  485611 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem
	I1019 13:15:31.100992  485611 main.go:141] libmachine: Decoding PEM data...
	I1019 13:15:31.101011  485611 main.go:141] libmachine: Parsing certificate...
	I1019 13:15:31.101072  485611 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem
	I1019 13:15:31.101096  485611 main.go:141] libmachine: Decoding PEM data...
	I1019 13:15:31.101110  485611 main.go:141] libmachine: Parsing certificate...
	I1019 13:15:31.101487  485611 cli_runner.go:164] Run: docker network inspect embed-certs-834340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 13:15:31.137783  485611 cli_runner.go:211] docker network inspect embed-certs-834340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 13:15:31.137891  485611 network_create.go:284] running [docker network inspect embed-certs-834340] to gather additional debugging logs...
	I1019 13:15:31.137915  485611 cli_runner.go:164] Run: docker network inspect embed-certs-834340
	W1019 13:15:31.155705  485611 cli_runner.go:211] docker network inspect embed-certs-834340 returned with exit code 1
	I1019 13:15:31.155739  485611 network_create.go:287] error running [docker network inspect embed-certs-834340]: docker network inspect embed-certs-834340: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-834340 not found
	I1019 13:15:31.155755  485611 network_create.go:289] output of [docker network inspect embed-certs-834340]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-834340 not found
	
	** /stderr **
	I1019 13:15:31.155889  485611 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:15:31.173929  485611 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-319c97358c5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2a:99:c3:44:12:51} reservation:<nil>}
	I1019 13:15:31.174199  485611 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5c09b33e0936 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:93:4b:f6:fd:1c} reservation:<nil>}
	I1019 13:15:31.174513  485611 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c2bbaadd4a8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:8f:96:27:48:2c} reservation:<nil>}
	I1019 13:15:31.174817  485611 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-02fa40d5a762 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:3b:ad:6d:17:1b} reservation:<nil>}
	I1019 13:15:31.175224  485611 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f1320}
	I1019 13:15:31.175241  485611 network_create.go:124] attempt to create docker network embed-certs-834340 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 13:15:31.175299  485611 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-834340 embed-certs-834340
	I1019 13:15:31.238899  485611 network_create.go:108] docker network embed-certs-834340 192.168.85.0/24 created
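The subnet selection above walks candidate private /24 ranges (192.168.49.0, .58.0, .67.0, .76.0 are taken by earlier clusters) and picks the first free one, 192.168.85.0/24. A sketch of that scan under the pattern visible in the log, where the third octet advances in steps of 9; it assumes the set of taken subnets was already gathered from docker network inspect.

package main

import "fmt"

// firstFreeSubnet scans 192.168.x.0/24 candidates starting at
// octet 49 and stepping by 9 (49, 58, 67, ...), returning the
// first subnet not present in taken.
func firstFreeSubnet(taken map[string]bool) (string, error) {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free private /24 subnet found")
}

func main() {
	taken := map[string]bool{ // the four bridges reported above
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	subnet, _ := firstFreeSubnet(taken)
	fmt.Println(subnet) // 192.168.85.0/24, as chosen in the log
}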
	I1019 13:15:31.238928  485611 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-834340" container
	I1019 13:15:31.239013  485611 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 13:15:31.265841  485611 cli_runner.go:164] Run: docker volume create embed-certs-834340 --label name.minikube.sigs.k8s.io=embed-certs-834340 --label created_by.minikube.sigs.k8s.io=true
	I1019 13:15:31.294108  485611 oci.go:103] Successfully created a docker volume embed-certs-834340
	I1019 13:15:31.294208  485611 cli_runner.go:164] Run: docker run --rm --name embed-certs-834340-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-834340 --entrypoint /usr/bin/test -v embed-certs-834340:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 13:15:32.013273  485611 oci.go:107] Successfully prepared a docker volume embed-certs-834340
	I1019 13:15:32.013351  485611 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:15:32.013376  485611 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 13:15:32.013455  485611 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-834340:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 13:15:36.868283  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:15:38.881873  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:15:37.787570  485611 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-834340:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.774071472s)
	I1019 13:15:37.787600  485611 kic.go:203] duration metric: took 5.774220438s to extract preloaded images to volume ...
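Each Run:/Completed: pair in these logs comes from a wrapper that shells out, times the call, and prints a "duration metric" when the command took long enough to be worth reporting. A minimal sketch of such a wrapper; the function name and the one-second reporting threshold are illustrative, not minikube's actual cli_runner API.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runTimed executes a command, returns its combined output, and
// reports how long it took, mirroring the "Completed: ... (5.77s)"
// lines above.
func runTimed(name string, args ...string) ([]byte, error) {
	start := time.Now()
	out, err := exec.Command(name, args...).CombinedOutput()
	elapsed := time.Since(start)
	if elapsed > time.Second { // only surface slow commands
		fmt.Printf("Completed: %s %v: (%s)\n", name, args, elapsed)
	}
	return out, err
}

func main() {
	if _, err := runTimed("docker", "ps", "-a", "--format", "{{.Names}}"); err != nil {
		fmt.Println("run failed:", err)
	}
}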
	W1019 13:15:37.787739  485611 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 13:15:37.787838  485611 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 13:15:37.869837  485611 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-834340 --name embed-certs-834340 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-834340 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-834340 --network embed-certs-834340 --ip 192.168.85.2 --volume embed-certs-834340:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 13:15:38.233152  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Running}}
	I1019 13:15:38.255987  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:15:38.281586  485611 cli_runner.go:164] Run: docker exec embed-certs-834340 stat /var/lib/dpkg/alternatives/iptables
	I1019 13:15:38.368772  485611 oci.go:144] the created container "embed-certs-834340" has a running status.
	I1019 13:15:38.368806  485611 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa...
	I1019 13:15:38.873953  485611 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 13:15:38.904296  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:15:38.935449  485611 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 13:15:38.935469  485611 kic_runner.go:114] Args: [docker exec --privileged embed-certs-834340 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 13:15:39.027042  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:15:39.054313  485611 machine.go:93] provisionDockerMachine start ...
	I1019 13:15:39.054396  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:39.087401  485611 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:39.087785  485611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1019 13:15:39.087805  485611 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:15:39.088436  485611 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55330->127.0.0.1:33438: read: connection reset by peer
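The connection reset above is expected: sshd inside the just-started container is not yet accepting connections, so the provisioner simply redials until the handshake succeeds (the hostname command completes at 13:15:42). A sketch of that retry loop, assuming golang.org/x/crypto/ssh and the key generated earlier in the log; the one-second backoff is an assumption.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps attempting an SSH handshake until the
// container's sshd comes up or the deadline passes.
func dialWithRetry(addr, user, keyPath string, timeout time.Duration) (*ssh.Client, error) {
	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic container
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh to %s: %w", addr, err)
		}
		time.Sleep(time.Second) // e.g. "read: connection reset by peer"
	}
}

func main() {
	client, err := dialWithRetry("127.0.0.1:33438", "docker",
		"/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa",
		time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer client.Close()
}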
	W1019 13:15:41.375929  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:15:43.887675  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:15:42.270174  485611 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-834340
	
	I1019 13:15:42.270200  485611 ubuntu.go:182] provisioning hostname "embed-certs-834340"
	I1019 13:15:42.270317  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:42.304756  485611 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:42.305079  485611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1019 13:15:42.305096  485611 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-834340 && echo "embed-certs-834340" | sudo tee /etc/hostname
	I1019 13:15:42.487800  485611 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-834340
	
	I1019 13:15:42.487937  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:42.514794  485611 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:42.515135  485611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1019 13:15:42.515152  485611 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-834340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-834340/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-834340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:15:42.666439  485611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 13:15:42.666525  485611 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:15:42.666561  485611 ubuntu.go:190] setting up certificates
	I1019 13:15:42.666584  485611 provision.go:84] configureAuth start
	I1019 13:15:42.666677  485611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-834340
	I1019 13:15:42.705847  485611 provision.go:143] copyHostCerts
	I1019 13:15:42.705925  485611 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:15:42.705934  485611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:15:42.706003  485611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:15:42.706086  485611 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:15:42.706102  485611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:15:42.706130  485611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:15:42.706208  485611 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:15:42.706220  485611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:15:42.706247  485611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:15:42.706315  485611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.embed-certs-834340 san=[127.0.0.1 192.168.85.2 embed-certs-834340 localhost minikube]
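The server cert above is issued against the minikube CA and carries the SANs from the san=[...] list (the node's IPs plus hostname aliases). A condensed sketch of that generation step with crypto/x509; the self-signed CA in main is a stand-in so the example runs, and the 26280h validity mirrors the CertExpiration in the machine config.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// signServerCert issues a server certificate for the given DNS
// names and IPs, signed by the provided CA.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
	dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-834340"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}

func main() {
	// Throwaway CA so the sketch is self-contained; the real run
	// loads ca.pem / ca-key.pem from the .minikube certs dir.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	pemBytes, _, err := signServerCert(caCert, caKey,
		[]string{"embed-certs-834340", "localhost", "minikube"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")})
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("server.pem", pemBytes, 0644); err != nil {
		panic(err)
	}
}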
	I1019 13:15:43.156249  485611 provision.go:177] copyRemoteCerts
	I1019 13:15:43.156363  485611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:15:43.156427  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:43.174572  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:15:43.278859  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 13:15:43.300911  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:15:43.326576  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 13:15:43.352381  485611 provision.go:87] duration metric: took 685.759035ms to configureAuth
	I1019 13:15:43.352414  485611 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:15:43.352730  485611 config.go:182] Loaded profile config "embed-certs-834340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:15:43.352908  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:43.379173  485611 main.go:141] libmachine: Using SSH client type: native
	I1019 13:15:43.379505  485611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1019 13:15:43.379528  485611 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:15:43.813644  485611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:15:43.813668  485611 machine.go:96] duration metric: took 4.759335125s to provisionDockerMachine
	I1019 13:15:43.813692  485611 client.go:171] duration metric: took 12.712790241s to LocalClient.Create
	I1019 13:15:43.813725  485611 start.go:167] duration metric: took 12.712896639s to libmachine.API.Create "embed-certs-834340"
	I1019 13:15:43.813738  485611 start.go:293] postStartSetup for "embed-certs-834340" (driver="docker")
	I1019 13:15:43.813750  485611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:15:43.813834  485611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:15:43.813893  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:43.853764  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:15:43.975754  485611 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:15:43.979773  485611 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:15:43.979807  485611 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:15:43.979819  485611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:15:43.979879  485611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:15:43.979971  485611 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:15:43.980084  485611 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:15:43.990366  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:15:44.026850  485611 start.go:296] duration metric: took 213.095418ms for postStartSetup
	I1019 13:15:44.027265  485611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-834340
	I1019 13:15:44.054045  485611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/config.json ...
	I1019 13:15:44.054331  485611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:15:44.054392  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:44.079758  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:15:44.193132  485611 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:15:44.198727  485611 start.go:128] duration metric: took 13.101618335s to createHost
	I1019 13:15:44.198749  485611 start.go:83] releasing machines lock for "embed-certs-834340", held for 13.10175117s
	I1019 13:15:44.198819  485611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-834340
	I1019 13:15:44.230748  485611 ssh_runner.go:195] Run: cat /version.json
	I1019 13:15:44.230799  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:44.231038  485611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:15:44.231104  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:15:44.262295  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:15:44.278464  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:15:44.410320  485611 ssh_runner.go:195] Run: systemctl --version
	I1019 13:15:44.515307  485611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:15:44.598169  485611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:15:44.607951  485611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:15:44.608027  485611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:15:44.645950  485611 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 13:15:44.645976  485611 start.go:495] detecting cgroup driver to use...
	I1019 13:15:44.646006  485611 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:15:44.646063  485611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:15:44.671924  485611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:15:44.695262  485611 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:15:44.695329  485611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:15:44.719033  485611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:15:44.748004  485611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:15:44.931648  485611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:15:45.186177  485611 docker.go:234] disabling docker service ...
	I1019 13:15:45.186333  485611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:15:45.238422  485611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:15:45.261985  485611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:15:45.483407  485611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:15:45.662095  485611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:15:45.677877  485611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:15:45.694861  485611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 13:15:45.694929  485611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:45.705250  485611 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:15:45.705321  485611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:45.717192  485611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:45.727649  485611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:45.737493  485611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:15:45.746952  485611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:45.758879  485611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:45.774911  485611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:15:45.786138  485611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:15:45.795280  485611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
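The sed commands above patch the CRI-O drop-in (/etc/crio/crio.conf.d/02-crio.conf) in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. The same pause_image rewrite expressed locally in Go, as a hedged sketch of what one of those sed lines does (the real edit runs over SSH):

package main

import (
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image line in a CRI-O drop-in,
// equivalent to the sed 's|^.*pause_image = .*$|...|' call above.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	_ = setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1")
}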
	I1019 13:15:45.815974  485611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:15:45.976264  485611 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 13:15:46.662235  485611 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:15:46.662342  485611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:15:46.668459  485611 start.go:563] Will wait 60s for crictl version
	I1019 13:15:46.668559  485611 ssh_runner.go:195] Run: which crictl
	I1019 13:15:46.672663  485611 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:15:46.720764  485611 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
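Both 60s waits above (for the crio.sock path and for a working crictl version) are stat-until-present polls against the freshly restarted runtime. A minimal local sketch of the socket wait; in the log the stat actually runs over SSH, and the one-second poll interval is an assumption.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket blocks until the CRI socket exists on disk or the
// timeout elapses, mirroring "Will wait 60s for socket path".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s did not appear within %s", path, timeout)
		}
		time.Sleep(time.Second) // assumed poll interval
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}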
	I1019 13:15:46.720881  485611 ssh_runner.go:195] Run: crio --version
	I1019 13:15:46.759144  485611 ssh_runner.go:195] Run: crio --version
	I1019 13:15:46.800654  485611 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 13:15:46.803912  485611 cli_runner.go:164] Run: docker network inspect embed-certs-834340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:15:46.829336  485611 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 13:15:46.833973  485611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:15:46.847307  485611 kubeadm.go:883] updating cluster {Name:embed-certs-834340 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 13:15:46.847433  485611 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:15:46.847493  485611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:15:46.903960  485611 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:15:46.903986  485611 crio.go:433] Images already preloaded, skipping extraction
	I1019 13:15:46.904040  485611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:15:46.931965  485611 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:15:46.931990  485611 cache_images.go:85] Images are preloaded, skipping loading
	I1019 13:15:46.931999  485611 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 13:15:46.932096  485611 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-834340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 13:15:46.932176  485611 ssh_runner.go:195] Run: crio config
	I1019 13:15:47.007261  485611 cni.go:84] Creating CNI manager for ""
	I1019 13:15:47.007283  485611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:15:47.007333  485611 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 13:15:47.007364  485611 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-834340 NodeName:embed-certs-834340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:15:47.007539  485611 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-834340"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 13:15:47.007627  485611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 13:15:47.016474  485611 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 13:15:47.016548  485611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 13:15:47.023902  485611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1019 13:15:47.038063  485611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:15:47.064478  485611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1019 13:15:47.082948  485611 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 13:15:47.087685  485611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
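The bash one-liner above rewrites /etc/hosts atomically: drop any stale control-plane.minikube.internal entry, append the fresh mapping, write to a temp file, then copy it back. The same transform expressed in Go, as a local sketch (the real run happens over SSH inside the node):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost replaces any existing entry for name in an /etc/hosts
// style file with "ip\tname", keeping all other lines intact.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping, like grep -v above
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}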
	I1019 13:15:47.102886  485611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:15:47.263124  485611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:15:47.280687  485611 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340 for IP: 192.168.85.2
	I1019 13:15:47.280708  485611 certs.go:195] generating shared ca certs ...
	I1019 13:15:47.280725  485611 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:47.280924  485611 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 13:15:47.280990  485611 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 13:15:47.281000  485611 certs.go:257] generating profile certs ...
	I1019 13:15:47.281080  485611 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/client.key
	I1019 13:15:47.281105  485611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/client.crt with IP's: []
	I1019 13:15:47.685607  485611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/client.crt ...
	I1019 13:15:47.685638  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/client.crt: {Name:mkbccb838549f1f87cffd774a53342e8ce836583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:47.685903  485611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/client.key ...
	I1019 13:15:47.685919  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/client.key: {Name:mk128314207ecf3cca665d607d4437ed612a47b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:47.686056  485611 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key.21a79282
	I1019 13:15:47.686077  485611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.crt.21a79282 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1019 13:15:47.991490  485611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.crt.21a79282 ...
	I1019 13:15:47.991521  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.crt.21a79282: {Name:mk93b731584176dc4e3c875d0a3b0188cf141876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:47.991734  485611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key.21a79282 ...
	I1019 13:15:47.991751  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key.21a79282: {Name:mkba25b2a83a02a27c18099aba13059d2c30977c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:47.991879  485611 certs.go:382] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.crt.21a79282 -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.crt
	I1019 13:15:47.991980  485611 certs.go:386] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key.21a79282 -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key
	I1019 13:15:47.992087  485611 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.key
	I1019 13:15:47.992134  485611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.crt with IP's: []
	I1019 13:15:48.644637  485611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.crt ...
	I1019 13:15:48.644665  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.crt: {Name:mk7e5a30f28b9e40781d76302eb1284fbe3bc598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:48.644809  485611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.key ...
	I1019 13:15:48.644824  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.key: {Name:mk68ae87e8be948598ff73e2bc8a6efe6a002635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:15:48.644997  485611 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem (1338 bytes)
	W1019 13:15:48.645039  485611 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518_empty.pem, impossibly tiny 0 bytes
	I1019 13:15:48.645053  485611 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 13:15:48.645077  485611 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 13:15:48.645104  485611 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 13:15:48.645130  485611 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 13:15:48.645179  485611 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:15:48.645761  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 13:15:48.669549  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 13:15:48.699884  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 13:15:48.724134  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 13:15:48.759051  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 13:15:48.780782  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 13:15:48.798754  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 13:15:48.817716  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 13:15:48.835742  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 13:15:48.854172  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem --> /usr/share/ca-certificates/294518.pem (1338 bytes)
	I1019 13:15:48.882119  485611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /usr/share/ca-certificates/2945182.pem (1708 bytes)
	I1019 13:15:48.901201  485611 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 13:15:48.914760  485611 ssh_runner.go:195] Run: openssl version
	I1019 13:15:48.922206  485611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 13:15:48.931171  485611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:15:48.935927  485611 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:15:48.936025  485611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:15:48.978635  485611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 13:15:48.987536  485611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294518.pem && ln -fs /usr/share/ca-certificates/294518.pem /etc/ssl/certs/294518.pem"
	I1019 13:15:48.995940  485611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294518.pem
	I1019 13:15:49.000438  485611 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:20 /usr/share/ca-certificates/294518.pem
	I1019 13:15:49.000537  485611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294518.pem
	I1019 13:15:49.046117  485611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294518.pem /etc/ssl/certs/51391683.0"
	I1019 13:15:49.057791  485611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2945182.pem && ln -fs /usr/share/ca-certificates/2945182.pem /etc/ssl/certs/2945182.pem"
	I1019 13:15:49.071981  485611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2945182.pem
	I1019 13:15:49.076233  485611 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:20 /usr/share/ca-certificates/2945182.pem
	I1019 13:15:49.076341  485611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2945182.pem
	I1019 13:15:49.124014  485611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2945182.pem /etc/ssl/certs/3ec20f2e.0"
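The three hash-and-link sequences above follow OpenSSL's c_rehash convention: each CA certificate is exposed in /etc/ssl/certs under its subject-hash name (e.g. b5213941.0) so TLS libraries that scan that directory can find it. A minimal sketch of the pattern, collapsed to a single link for brevity (the log actually links via /etc/ssl/certs/minikubeCA.pem first):

    # Hash-and-link pattern used above; cert path taken from the log.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"  # c_rehash-style symlink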
	I1019 13:15:49.133163  485611 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 13:15:49.137851  485611 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 13:15:49.137962  485611 kubeadm.go:400] StartCluster: {Name:embed-certs-834340 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:15:49.138056  485611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 13:15:49.138139  485611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 13:15:49.172269  485611 cri.go:89] found id: ""
	I1019 13:15:49.172389  485611 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 13:15:49.191898  485611 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 13:15:49.199354  485611 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 13:15:49.199446  485611 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 13:15:49.215213  485611 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 13:15:49.215233  485611 kubeadm.go:157] found existing configuration files:
	
	I1019 13:15:49.215316  485611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 13:15:49.235195  485611 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 13:15:49.235296  485611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 13:15:49.250094  485611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 13:15:49.269745  485611 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 13:15:49.269840  485611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 13:15:49.279256  485611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 13:15:49.287102  485611 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 13:15:49.287204  485611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 13:15:49.294438  485611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 13:15:49.304934  485611 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 13:15:49.304995  485611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
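The preceding grep/rm pairs are the stale-config sweep: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, minikube checks that the file points at https://control-plane.minikube.internal:8443 and removes it otherwise (here every file was simply absent, so each rm was a no-op). A hedged shell equivalent, not minikube's actual code:

    # Approximate the sweep above in one loop.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" \
        "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done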
	I1019 13:15:49.313098  485611 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 13:15:49.361726  485611 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 13:15:49.362100  485611 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 13:15:49.402083  485611 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 13:15:49.402156  485611 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 13:15:49.402193  485611 kubeadm.go:318] OS: Linux
	I1019 13:15:49.402241  485611 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 13:15:49.402292  485611 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1019 13:15:49.402342  485611 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 13:15:49.402392  485611 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 13:15:49.402443  485611 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 13:15:49.402499  485611 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 13:15:49.402547  485611 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 13:15:49.402597  485611 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 13:15:49.402645  485611 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1019 13:15:49.493062  485611 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 13:15:49.493230  485611 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 13:15:49.493351  485611 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
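As the preflight message notes, the image download can be performed ahead of time; against the same config file passed to kubeadm init above, that would look like the following (a sketch, not something the harness runs):

    # Optional pre-pull suggested by the preflight output above.
    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml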
	I1019 13:15:49.501160  485611 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1019 13:15:46.368681  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:15:48.369001  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:15:49.506747  485611 out.go:252]   - Generating certificates and keys ...
	I1019 13:15:49.506907  485611 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 13:15:49.507018  485611 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1019 13:15:50.152484  485611 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 13:15:50.698709  485611 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	W1019 13:15:50.869429  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:15:52.870337  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:15:54.871338  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:15:51.147081  485611 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 13:15:51.298849  485611 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 13:15:52.311108  485611 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 13:15:52.311467  485611 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-834340 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 13:15:52.673200  485611 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 13:15:52.673525  485611 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-834340 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 13:15:52.743627  485611 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 13:15:52.970401  485611 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 13:15:53.628887  485611 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 13:15:53.629159  485611 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 13:15:54.808810  485611 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 13:15:55.246139  485611 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 13:15:55.646238  485611 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 13:15:55.929305  485611 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 13:15:56.070194  485611 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 13:15:56.070730  485611 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 13:15:56.075468  485611 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1019 13:15:57.369185  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:15:59.868742  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:15:56.078897  485611 out.go:252]   - Booting up control plane ...
	I1019 13:15:56.079004  485611 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 13:15:56.079086  485611 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 13:15:56.079828  485611 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 13:15:56.105296  485611 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 13:15:56.105420  485611 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 13:15:56.113477  485611 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 13:15:56.113866  485611 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 13:15:56.113916  485611 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 13:15:56.244599  485611 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 13:15:56.244725  485611 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 13:15:58.246317  485611 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.00181996s
	I1019 13:15:58.249813  485611 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 13:15:58.249913  485611 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1019 13:15:58.250286  485611 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 13:15:58.250383  485611 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1019 13:16:01.869124  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:16:03.869407  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:16:02.728134  485611 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.477808215s
	I1019 13:16:04.217697  485611 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.967885333s
	I1019 13:16:05.752625  485611 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.502456819s
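The three control-plane checks (plus the earlier kubelet check) poll plain health endpoints and can be reproduced by hand; -k skips certificate verification, and unauthenticated access to /livez depends on the cluster's default anonymous-access rules:

    # Manual versions of the probes above (endpoints taken from the log).
    curl -sk https://192.168.85.2:8443/livez    # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz    # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez      # kube-scheduler
    curl -s  http://127.0.0.1:10248/healthz     # kubelet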
	I1019 13:16:05.779253  485611 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 13:16:05.795422  485611 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 13:16:05.819432  485611 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 13:16:05.819794  485611 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-834340 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 13:16:05.833484  485611 kubeadm.go:318] [bootstrap-token] Using token: upnd0k.hdge3z3mcruoqygz
	I1019 13:16:05.836375  485611 out.go:252]   - Configuring RBAC rules ...
	I1019 13:16:05.836508  485611 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 13:16:05.840975  485611 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 13:16:05.850941  485611 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 13:16:05.855240  485611 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 13:16:05.861656  485611 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 13:16:05.872684  485611 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 13:16:06.163281  485611 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 13:16:06.614161  485611 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 13:16:07.160392  485611 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 13:16:07.161843  485611 kubeadm.go:318] 
	I1019 13:16:07.161923  485611 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 13:16:07.161929  485611 kubeadm.go:318] 
	I1019 13:16:07.162010  485611 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 13:16:07.162015  485611 kubeadm.go:318] 
	I1019 13:16:07.162041  485611 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 13:16:07.162109  485611 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 13:16:07.162162  485611 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 13:16:07.162166  485611 kubeadm.go:318] 
	I1019 13:16:07.162222  485611 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 13:16:07.162227  485611 kubeadm.go:318] 
	I1019 13:16:07.162277  485611 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 13:16:07.162282  485611 kubeadm.go:318] 
	I1019 13:16:07.162336  485611 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 13:16:07.162415  485611 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 13:16:07.162486  485611 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 13:16:07.162493  485611 kubeadm.go:318] 
	I1019 13:16:07.162581  485611 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 13:16:07.162662  485611 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 13:16:07.162667  485611 kubeadm.go:318] 
	I1019 13:16:07.162755  485611 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token upnd0k.hdge3z3mcruoqygz \
	I1019 13:16:07.162863  485611 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0ee0bbb0fbfe8419c71683408bd38502dbf921f3cb179cb0365daeb92f967309 \
	I1019 13:16:07.162885  485611 kubeadm.go:318] 	--control-plane 
	I1019 13:16:07.162889  485611 kubeadm.go:318] 
	I1019 13:16:07.162978  485611 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 13:16:07.162982  485611 kubeadm.go:318] 
	I1019 13:16:07.163067  485611 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token upnd0k.hdge3z3mcruoqygz \
	I1019 13:16:07.163200  485611 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0ee0bbb0fbfe8419c71683408bd38502dbf921f3cb179cb0365daeb92f967309 
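The --discovery-token-ca-cert-hash value above is the SHA-256 of the cluster CA's public key. The standard recipe from the kubeadm reference recomputes it on the control plane (CA path taken from the certificateDir logged earlier; assumes an RSA CA key):

    # Recompute the discovery hash from the cluster CA certificate.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'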
	I1019 13:16:07.166668  485611 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 13:16:07.166910  485611 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 13:16:07.167026  485611 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
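The last warning is directly actionable; enabling the unit makes the kubelet start on boot (unit name taken from the message itself):

    # As the warning suggests.
    sudo systemctl enable kubelet.service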
	I1019 13:16:07.167046  485611 cni.go:84] Creating CNI manager for ""
	I1019 13:16:07.167057  485611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:16:07.172184  485611 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1019 13:16:06.370227  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:16:08.868219  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:16:07.175123  485611 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 13:16:07.181463  485611 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 13:16:07.181486  485611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 13:16:07.205501  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 13:16:07.648571  485611 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 13:16:07.648639  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:07.648715  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-834340 minikube.k8s.io/updated_at=2025_10_19T13_16_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=embed-certs-834340 minikube.k8s.io/primary=true
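The label set applied above can be checked with kubectl's --show-labels flag (node name from the log):

    # Verify the minikube.k8s.io/* labels applied above.
    kubectl get node embed-certs-834340 --show-labels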
	I1019 13:16:07.838122  485611 ops.go:34] apiserver oom_adj: -16
	I1019 13:16:07.838231  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:08.338839  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:08.838862  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:09.339202  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:09.838879  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:10.339157  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:10.838612  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:11.339187  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:11.838463  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:12.339246  485611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:16:12.493646  485611 kubeadm.go:1113] duration metric: took 4.845059862s to wait for elevateKubeSystemPrivileges
	I1019 13:16:12.493724  485611 kubeadm.go:402] duration metric: took 23.35578878s to StartCluster
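The repeated `get sa default` calls above are a readiness poll: kubeadm init returns before the controller-manager has created the default ServiceAccount, so minikube retries on a roughly 500 ms cadence until the call succeeds (about 4.8 s here, per the duration metric). A hedged shell equivalent:

    # Poll until the default ServiceAccount exists (sketch of the loop above).
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done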
	I1019 13:16:12.493744  485611 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:16:12.493807  485611 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:16:12.495250  485611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:16:12.495480  485611 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:16:12.495574  485611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 13:16:12.495873  485611 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 13:16:12.495962  485611 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-834340"
	I1019 13:16:12.495977  485611 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-834340"
	I1019 13:16:12.496003  485611 host.go:66] Checking if "embed-certs-834340" exists ...
	I1019 13:16:12.496545  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:16:12.497048  485611 config.go:182] Loaded profile config "embed-certs-834340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:16:12.497104  485611 addons.go:69] Setting default-storageclass=true in profile "embed-certs-834340"
	I1019 13:16:12.497134  485611 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-834340"
	I1019 13:16:12.497419  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:16:12.500210  485611 out.go:179] * Verifying Kubernetes components...
	I1019 13:16:12.509970  485611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:16:12.538263  485611 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 13:16:12.544161  485611 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:16:12.544191  485611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 13:16:12.544271  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:16:12.544500  485611 addons.go:238] Setting addon default-storageclass=true in "embed-certs-834340"
	I1019 13:16:12.544534  485611 host.go:66] Checking if "embed-certs-834340" exists ...
	I1019 13:16:12.544958  485611 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:16:12.575164  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:16:12.590089  485611 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 13:16:12.590139  485611 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 13:16:12.590203  485611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:16:12.628224  485611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:16:12.854755  485611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:16:12.943080  485611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 13:16:12.943191  485611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:16:12.989992  485611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 13:16:13.648655  485611 node_ready.go:35] waiting up to 6m0s for node "embed-certs-834340" to be "Ready" ...
	I1019 13:16:13.649000  485611 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1019 13:16:13.695150  485611 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
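The sed pipeline at 13:16:12.943 rewrites the coredns ConfigMap in place; the "host record injected" confirmation above means the Corefile now resolves host.minikube.internal to the host gateway before falling through to the default forwarder, i.e. it gained this stanza (reconstructed from the sed expression):

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }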
	W1019 13:16:10.868706  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	W1019 13:16:13.368241  482757 pod_ready.go:104] pod "coredns-66bc5c9577-qp7k5" is not "Ready", error: <nil>
	I1019 13:16:14.867796  482757 pod_ready.go:94] pod "coredns-66bc5c9577-qp7k5" is "Ready"
	I1019 13:16:14.867825  482757 pod_ready.go:86] duration metric: took 40.005661779s for pod "coredns-66bc5c9577-qp7k5" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:14.870555  482757 pod_ready.go:83] waiting for pod "etcd-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:14.874977  482757 pod_ready.go:94] pod "etcd-no-preload-108149" is "Ready"
	I1019 13:16:14.875006  482757 pod_ready.go:86] duration metric: took 4.423017ms for pod "etcd-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:14.877179  482757 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:14.881744  482757 pod_ready.go:94] pod "kube-apiserver-no-preload-108149" is "Ready"
	I1019 13:16:14.881772  482757 pod_ready.go:86] duration metric: took 4.565855ms for pod "kube-apiserver-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:14.884092  482757 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:15.067143  482757 pod_ready.go:94] pod "kube-controller-manager-no-preload-108149" is "Ready"
	I1019 13:16:15.067175  482757 pod_ready.go:86] duration metric: took 183.057745ms for pod "kube-controller-manager-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:15.266313  482757 pod_ready.go:83] waiting for pod "kube-proxy-qfr27" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:15.666804  482757 pod_ready.go:94] pod "kube-proxy-qfr27" is "Ready"
	I1019 13:16:15.666832  482757 pod_ready.go:86] duration metric: took 400.49093ms for pod "kube-proxy-qfr27" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:13.698020  485611 addons.go:514] duration metric: took 1.202130688s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 13:16:14.154783  485611 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-834340" context rescaled to 1 replicas
	W1019 13:16:15.652505  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	I1019 13:16:15.866798  482757 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:16.266145  482757 pod_ready.go:94] pod "kube-scheduler-no-preload-108149" is "Ready"
	I1019 13:16:16.266173  482757 pod_ready.go:86] duration metric: took 399.34273ms for pod "kube-scheduler-no-preload-108149" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:16.266187  482757 pod_ready.go:40] duration metric: took 41.407951691s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:16:16.337918  482757 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 13:16:16.341422  482757 out.go:179] * Done! kubectl is now configured to use "no-preload-108149" cluster and "default" namespace by default
	W1019 13:16:18.151452  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	W1019 13:16:20.151637  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	W1019 13:16:22.154454  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	W1019 13:16:24.651560  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	W1019 13:16:27.151602  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	W1019 13:16:29.152908  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
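The node_ready retry loop above is minikube polling the node's Ready condition; the same wait can be expressed with kubectl directly (a sketch, not what the harness runs):

    # Block until the node reports Ready, up to the 6m budget logged earlier.
    kubectl wait --for=condition=Ready node/embed-certs-834340 --timeout=6m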
	
	
	==> CRI-O <==
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.620423712Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e2a5478e-05ce-49ed-87b0-cf4abfb22bb1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.621388082Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3cb44831-3c2c-4dd8-9c1e-fabbd699ba77 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.622469622Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w/dashboard-metrics-scraper" id=7ced63a7-0eed-401e-8234-f25a42e0f19a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.622755099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.631011998Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.631705349Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.647211913Z" level=info msg="Created container e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w/dashboard-metrics-scraper" id=7ced63a7-0eed-401e-8234-f25a42e0f19a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.648325593Z" level=info msg="Starting container: e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b" id=03407226-2dbd-421a-ac21-d822d87f01a0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:16:10 no-preload-108149 crio[651]: time="2025-10-19T13:16:10.65019592Z" level=info msg="Started container" PID=1640 containerID=e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w/dashboard-metrics-scraper id=03407226-2dbd-421a-ac21-d822d87f01a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b926023c2cf0cfe55827f1b70f842c51643b4b2ba9d8f31e9aee0dd12b634a4e
	Oct 19 13:16:10 no-preload-108149 conmon[1638]: conmon e89abbf84a6ae8fad713 <ninfo>: container 1640 exited with status 1
	Oct 19 13:16:11 no-preload-108149 crio[651]: time="2025-10-19T13:16:11.001301254Z" level=info msg="Removing container: 5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9" id=1fd7fd20-0bef-4fba-8dcc-5a721826e768 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:16:11 no-preload-108149 crio[651]: time="2025-10-19T13:16:11.010631161Z" level=info msg="Error loading conmon cgroup of container 5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9: cgroup deleted" id=1fd7fd20-0bef-4fba-8dcc-5a721826e768 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:16:11 no-preload-108149 crio[651]: time="2025-10-19T13:16:11.014539669Z" level=info msg="Removed container 5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w/dashboard-metrics-scraper" id=1fd7fd20-0bef-4fba-8dcc-5a721826e768 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.908884343Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.914605807Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.914643018Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.91466815Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.919587207Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.919778947Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.919863822Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.934299521Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.934337782Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.934362799Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.939145009Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:16:13 no-preload-108149 crio[651]: time="2025-10-19T13:16:13.939182647Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e89abbf84a6ae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   b926023c2cf0c       dashboard-metrics-scraper-6ffb444bf9-lrg9w   kubernetes-dashboard
	6643d7449e536       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           29 seconds ago       Running             storage-provisioner         2                   70549ea478aaa       storage-provisioner                          kube-system
	75b1666aca773       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   50 seconds ago       Running             kubernetes-dashboard        0                   dc1cfda0db32d       kubernetes-dashboard-855c9754f9-8wvh6        kubernetes-dashboard
	35a081210b7fa       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   46118d1f936a3       kindnet-s5wgc                                kube-system
	3602ce3b8d0b4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   0b93268a36818       coredns-66bc5c9577-qp7k5                     kube-system
	e15c1f7380a9f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   47e281aa26f2f       busybox                                      default
	d7af5087f11ac       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   fa142006911e8       kube-proxy-qfr27                             kube-system
	f06654b2d2683       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           About a minute ago   Exited              storage-provisioner         1                   70549ea478aaa       storage-provisioner                          kube-system
	0452bd1f37844       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   f58a9c335771b       kube-controller-manager-no-preload-108149    kube-system
	24a75ddccb641       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   a60cbf729c4f7       kube-scheduler-no-preload-108149             kube-system
	b649715b02d1c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   1dc9fe6ebc265       kube-apiserver-no-preload-108149             kube-system
	69cd340c87d96       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   972392ac13c9e       etcd-no-preload-108149                       kube-system
	
	
	==> coredns [3602ce3b8d0b42a07e319435c2d257a4f4c245eb0405e0ad593bf94803f45907] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58333 - 31132 "HINFO IN 2288926805526552410.6672143865960942580. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03362048s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-108149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-108149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=no-preload-108149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_14_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:14:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-108149
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:16:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:16:03 +0000   Sun, 19 Oct 2025 13:14:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:16:03 +0000   Sun, 19 Oct 2025 13:14:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:16:03 +0000   Sun, 19 Oct 2025 13:14:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:16:03 +0000   Sun, 19 Oct 2025 13:14:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-108149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                a4d8c0d2-63fb-4a48-994a-8850e6b21b64
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-66bc5c9577-qp7k5                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m3s
	  kube-system                 etcd-no-preload-108149                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m11s
	  kube-system                 kindnet-s5wgc                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-no-preload-108149              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-no-preload-108149     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-proxy-qfr27                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-no-preload-108149              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lrg9w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8wvh6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m                     kube-proxy       
	  Normal   Starting                 59s                    kube-proxy       
	  Warning  CgroupV1                 2m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m20s (x8 over 2m20s)  kubelet          Node no-preload-108149 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m20s (x8 over 2m20s)  kubelet          Node no-preload-108149 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s (x8 over 2m20s)  kubelet          Node no-preload-108149 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m7s                   kubelet          Node no-preload-108149 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m7s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m7s                   kubelet          Node no-preload-108149 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m7s                   kubelet          Node no-preload-108149 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m7s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m4s                   node-controller  Node no-preload-108149 event: Registered Node no-preload-108149 in Controller
	  Normal   NodeReady                108s                   kubelet          Node no-preload-108149 status is now: NodeReady
	  Normal   Starting                 71s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)      kubelet          Node no-preload-108149 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)      kubelet          Node no-preload-108149 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)      kubelet          Node no-preload-108149 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                    node-controller  Node no-preload-108149 event: Registered Node no-preload-108149 in Controller
	
	
	==> dmesg <==
	[Oct19 12:52] overlayfs: idmapped layers are currently not supported
	[Oct19 12:53] overlayfs: idmapped layers are currently not supported
	[Oct19 12:54] overlayfs: idmapped layers are currently not supported
	[Oct19 12:56] overlayfs: idmapped layers are currently not supported
	[ +16.315179] overlayfs: idmapped layers are currently not supported
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	[Oct19 13:15] overlayfs: idmapped layers are currently not supported
	[ +34.413925] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [69cd340c87d966c00eb54338c8930e6a5166ffc684c24d32e2f7db4bde1a9182] <==
	{"level":"warn","ts":"2025-10-19T13:15:29.932206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:29.955463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.016733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.178155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.190243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.401950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.423859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.466831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.489413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.518188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.547910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.583075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.609488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.651668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.733812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.751926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.763422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.784258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.801258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.830056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.856678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.900440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.926104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:30.958464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:15:31.090007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43694","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:16:34 up  2:59,  0 user,  load average: 3.62, 3.28, 2.77
	Linux no-preload-108149 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [35a081210b7fa08acbe3227adf5610734dfa60738cda733fc91359b203bcf29b] <==
	I1019 13:15:33.706258       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:15:33.706449       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 13:15:33.706571       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:15:33.706583       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:15:33.706594       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:15:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:15:33.907476       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:15:33.907534       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:15:33.907588       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:15:33.908728       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 13:16:03.908379       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 13:16:03.908511       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 13:16:03.908636       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 13:16:03.908769       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 13:16:05.408262       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:16:05.408407       1 metrics.go:72] Registering metrics
	I1019 13:16:05.408511       1 controller.go:711] "Syncing nftables rules"
	I1019 13:16:13.908531       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:16:13.908623       1 main.go:301] handling current node
	I1019 13:16:23.909832       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:16:23.909902       1 main.go:301] handling current node
	I1019 13:16:33.913967       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:16:33.913998       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b649715b02d1cdf3f028d00c9f1eda59d4501cabfe3bf7e05ad588e094515f85] <==
	I1019 13:15:32.226285       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 13:15:32.243625       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:15:32.244923       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 13:15:32.244978       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 13:15:32.256973       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 13:15:32.257000       1 policy_source.go:240] refreshing policies
	E1019 13:15:32.276010       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 13:15:32.279894       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 13:15:32.280446       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:15:32.323698       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 13:15:32.323758       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 13:15:32.332505       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 13:15:32.349936       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 13:15:32.349956       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 13:15:32.499917       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:15:32.957641       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:15:34.231405       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 13:15:34.381388       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 13:15:34.432954       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:15:34.448140       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:15:34.699418       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.198.68"}
	I1019 13:15:34.735749       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.49.242"}
	I1019 13:15:35.755119       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 13:15:36.148897       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:15:36.278782       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0452bd1f37844e20d71713464f7c02412906aa5aeab0336266163b06aba35d56] <==
	I1019 13:15:35.705243       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:15:35.708039       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 13:15:35.708061       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 13:15:35.710179       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 13:15:35.705287       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 13:15:35.705299       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 13:15:35.705253       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 13:15:35.705269       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 13:15:35.705278       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 13:15:35.716908       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 13:15:35.717185       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 13:15:35.717263       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 13:15:35.717313       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 13:15:35.718060       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:15:35.721797       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 13:15:35.721893       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 13:15:35.722283       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 13:15:35.737135       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 13:15:35.743612       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 13:15:35.746398       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:15:35.766677       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 13:15:35.766797       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:15:35.770431       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:15:35.788338       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 13:15:35.788398       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [d7af5087f11ac0a282a7c09f5c3f2ad9affeab8823717f75f713a854c8124884] <==
	I1019 13:15:34.119283       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:15:34.275176       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:15:34.380250       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:15:34.380297       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 13:15:34.380362       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:15:34.546683       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:15:34.546812       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:15:34.552008       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:15:34.552383       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:15:34.552598       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:15:34.562556       1 config.go:200] "Starting service config controller"
	I1019 13:15:34.562591       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:15:34.562615       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:15:34.562620       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:15:34.562632       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:15:34.562636       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:15:34.567806       1 config.go:309] "Starting node config controller"
	I1019 13:15:34.567823       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:15:34.567830       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:15:34.663704       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 13:15:34.663749       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:15:34.792517       1 shared_informer.go:356] "Caches are synced" controller="service config"
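The "nodePortAddresses is unset" warning above means NodePort services accept connections on every local IP, exactly as kube-proxy says. The remedy it suggests is a KubeProxyConfiguration change; a hedged sketch of applying it in a kubeadm-style cluster (whether a throwaway CI cluster needs this is debatable; "primary" requires Kubernetes 1.29+, satisfied by v1.34.1 here):

    kubectl -n kube-system edit configmap kube-proxy
    # in the config.conf key, set:  nodePortAddresses: ["primary"]
    kubectl -n kube-system rollout restart daemonset kube-proxy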
	
	
	==> kube-scheduler [24a75ddccb641f284753e265035d0ec049f86894b9a8bb4c8eb68267f2a6bbd3] <==
	I1019 13:15:26.503303       1 serving.go:386] Generated self-signed cert in-memory
	W1019 13:15:32.160159       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 13:15:32.160191       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 13:15:32.160201       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 13:15:32.160211       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 13:15:32.270168       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 13:15:32.270291       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:15:32.279451       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:15:32.282545       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 13:15:32.282573       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:15:32.331079       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:15:32.441810       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
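The three authentication warnings above are transient here (the client-ca informer syncs moments later), and the log's suggested rolebinding does not quite fit the scheduler, which authenticates as the user system:kube-scheduler rather than a service account. A sketch of the equivalent grant, in case it ever needs to be applied by hand (the binding name scheduler-auth-reader is hypothetical):

    kubectl create rolebinding scheduler-auth-reader -n kube-system \
      --role=extension-apiserver-authentication-reader --user=system:kube-scheduler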
	
	
	==> kubelet <==
	Oct 19 13:15:33 no-preload-108149 kubelet[767]: W1019 13:15:33.045779     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/crio-46118d1f936a3073a1759f158849143bcff5bad3532c932be2b41f40f9bbe7a1 WatchSource:0}: Error finding container 46118d1f936a3073a1759f158849143bcff5bad3532c932be2b41f40f9bbe7a1: Status 404 returned error can't find the container with id 46118d1f936a3073a1759f158849143bcff5bad3532c932be2b41f40f9bbe7a1
	Oct 19 13:15:36 no-preload-108149 kubelet[767]: I1019 13:15:36.369971     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wklg4\" (UniqueName: \"kubernetes.io/projected/20bcb516-2b35-4c2e-af84-2110a56382b9-kube-api-access-wklg4\") pod \"dashboard-metrics-scraper-6ffb444bf9-lrg9w\" (UID: \"20bcb516-2b35-4c2e-af84-2110a56382b9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w"
	Oct 19 13:15:36 no-preload-108149 kubelet[767]: I1019 13:15:36.370036     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5mfm\" (UniqueName: \"kubernetes.io/projected/1e8b4000-201a-4e13-a3ec-4b0799d1f3cd-kube-api-access-c5mfm\") pod \"kubernetes-dashboard-855c9754f9-8wvh6\" (UID: \"1e8b4000-201a-4e13-a3ec-4b0799d1f3cd\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8wvh6"
	Oct 19 13:15:36 no-preload-108149 kubelet[767]: I1019 13:15:36.370074     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1e8b4000-201a-4e13-a3ec-4b0799d1f3cd-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8wvh6\" (UID: \"1e8b4000-201a-4e13-a3ec-4b0799d1f3cd\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8wvh6"
	Oct 19 13:15:36 no-preload-108149 kubelet[767]: I1019 13:15:36.370095     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/20bcb516-2b35-4c2e-af84-2110a56382b9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-lrg9w\" (UID: \"20bcb516-2b35-4c2e-af84-2110a56382b9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w"
	Oct 19 13:15:36 no-preload-108149 kubelet[767]: W1019 13:15:36.634457     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/4857474c82b9a613604d363560d900cabf323a11115f5034cef7d8b100e506f0/crio-b926023c2cf0cfe55827f1b70f842c51643b4b2ba9d8f31e9aee0dd12b634a4e WatchSource:0}: Error finding container b926023c2cf0cfe55827f1b70f842c51643b4b2ba9d8f31e9aee0dd12b634a4e: Status 404 returned error can't find the container with id b926023c2cf0cfe55827f1b70f842c51643b4b2ba9d8f31e9aee0dd12b634a4e
	Oct 19 13:15:49 no-preload-108149 kubelet[767]: I1019 13:15:49.936482     767 scope.go:117] "RemoveContainer" containerID="1fe8e0af5771f032baab83ed8cf4f208ff0d3ba37df65f7ce007aae30ca71716"
	Oct 19 13:15:49 no-preload-108149 kubelet[767]: I1019 13:15:49.975504     767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8wvh6" podStartSLOduration=7.132295523 podStartE2EDuration="13.975358366s" podCreationTimestamp="2025-10-19 13:15:36 +0000 UTC" firstStartedPulling="2025-10-19 13:15:36.609273399 +0000 UTC m=+13.344460867" lastFinishedPulling="2025-10-19 13:15:43.452336241 +0000 UTC m=+20.187523710" observedRunningTime="2025-10-19 13:15:43.925344937 +0000 UTC m=+20.660532414" watchObservedRunningTime="2025-10-19 13:15:49.975358366 +0000 UTC m=+26.710545834"
	Oct 19 13:15:50 no-preload-108149 kubelet[767]: I1019 13:15:50.941661     767 scope.go:117] "RemoveContainer" containerID="5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9"
	Oct 19 13:15:50 no-preload-108149 kubelet[767]: E1019 13:15:50.942285     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrg9w_kubernetes-dashboard(20bcb516-2b35-4c2e-af84-2110a56382b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w" podUID="20bcb516-2b35-4c2e-af84-2110a56382b9"
	Oct 19 13:15:50 no-preload-108149 kubelet[767]: I1019 13:15:50.942900     767 scope.go:117] "RemoveContainer" containerID="1fe8e0af5771f032baab83ed8cf4f208ff0d3ba37df65f7ce007aae30ca71716"
	Oct 19 13:15:51 no-preload-108149 kubelet[767]: I1019 13:15:51.945457     767 scope.go:117] "RemoveContainer" containerID="5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9"
	Oct 19 13:15:51 no-preload-108149 kubelet[767]: E1019 13:15:51.945619     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrg9w_kubernetes-dashboard(20bcb516-2b35-4c2e-af84-2110a56382b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w" podUID="20bcb516-2b35-4c2e-af84-2110a56382b9"
	Oct 19 13:15:56 no-preload-108149 kubelet[767]: I1019 13:15:56.585916     767 scope.go:117] "RemoveContainer" containerID="5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9"
	Oct 19 13:15:56 no-preload-108149 kubelet[767]: E1019 13:15:56.586121     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrg9w_kubernetes-dashboard(20bcb516-2b35-4c2e-af84-2110a56382b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w" podUID="20bcb516-2b35-4c2e-af84-2110a56382b9"
	Oct 19 13:16:03 no-preload-108149 kubelet[767]: I1019 13:16:03.976701     767 scope.go:117] "RemoveContainer" containerID="f06654b2d2683ec240f70fa86e309b5a103311a29fb5afb2f214482a14902133"
	Oct 19 13:16:10 no-preload-108149 kubelet[767]: I1019 13:16:10.619712     767 scope.go:117] "RemoveContainer" containerID="5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9"
	Oct 19 13:16:10 no-preload-108149 kubelet[767]: I1019 13:16:10.998750     767 scope.go:117] "RemoveContainer" containerID="5f998b3bc01b24989030bccd441a62269079dd4cb5f1c38114a640ce6c52cdb9"
	Oct 19 13:16:10 no-preload-108149 kubelet[767]: I1019 13:16:10.999049     767 scope.go:117] "RemoveContainer" containerID="e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b"
	Oct 19 13:16:10 no-preload-108149 kubelet[767]: E1019 13:16:10.999205     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrg9w_kubernetes-dashboard(20bcb516-2b35-4c2e-af84-2110a56382b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w" podUID="20bcb516-2b35-4c2e-af84-2110a56382b9"
	Oct 19 13:16:16 no-preload-108149 kubelet[767]: I1019 13:16:16.586274     767 scope.go:117] "RemoveContainer" containerID="e89abbf84a6ae8fad71347e209ff96a8ac6de8edccf16176b6ff8c53cdf3116b"
	Oct 19 13:16:16 no-preload-108149 kubelet[767]: E1019 13:16:16.586451     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrg9w_kubernetes-dashboard(20bcb516-2b35-4c2e-af84-2110a56382b9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrg9w" podUID="20bcb516-2b35-4c2e-af84-2110a56382b9"
	Oct 19 13:16:28 no-preload-108149 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 13:16:28 no-preload-108149 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 13:16:28 no-preload-108149 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
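The kubelet section shows dashboard-metrics-scraper in CrashLoopBackOff with the back-off climbing from 10s to 20s, but the capture never includes the container's own output. The next diagnostic step would be to pull it directly; a sketch using the pod name from the lines above:

    kubectl --context no-preload-108149 -n kubernetes-dashboard logs \
      dashboard-metrics-scraper-6ffb444bf9-lrg9w --previous
    kubectl --context no-preload-108149 -n kubernetes-dashboard describe pod \
      dashboard-metrics-scraper-6ffb444bf9-lrg9w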
	
	
	==> kubernetes-dashboard [75b1666aca773065101164715baec4b2ea6e97910e9b1b816056fe57b3894d8b] <==
	2025/10/19 13:15:43 Using namespace: kubernetes-dashboard
	2025/10/19 13:15:43 Using in-cluster config to connect to apiserver
	2025/10/19 13:15:43 Using secret token for csrf signing
	2025/10/19 13:15:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 13:15:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 13:15:43 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 13:15:43 Generating JWE encryption key
	2025/10/19 13:15:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 13:15:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 13:15:44 Initializing JWE encryption key from synchronized object
	2025/10/19 13:15:44 Creating in-cluster Sidecar client
	2025/10/19 13:15:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:15:44 Serving insecurely on HTTP port: 9090
	2025/10/19 13:16:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:15:43 Starting overwatch
	
	
	==> storage-provisioner [6643d7449e536bebc7c48cb509e939d206eca0e67efbecc9a49f6f230d6a8f2e] <==
	W1019 13:16:04.087976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:07.549796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:11.810617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:15.410941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:18.464681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:21.486478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:21.491433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:16:21.491587       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 13:16:21.491770       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-108149_d7617f91-b828-455b-aa0b-eeb97a558d7e!
	I1019 13:16:21.492604       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a5ba3f0-b17e-4468-873b-e2df26dbba12", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-108149_d7617f91-b828-455b-aa0b-eeb97a558d7e became leader
	W1019 13:16:21.496761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:21.501915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:16:21.592168       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-108149_d7617f91-b828-455b-aa0b-eeb97a558d7e!
	W1019 13:16:23.504769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:23.509591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:25.512779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:25.519465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:27.522496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:27.526898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:29.530352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:29.534831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:31.538641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:31.549443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:33.552472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:33.562986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f06654b2d2683ec240f70fa86e309b5a103311a29fb5afb2f214482a14902133] <==
	I1019 13:15:33.882215       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 13:16:03.884702       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
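This fatal i/o timeout against https://10.96.0.1:443 lines up with the kindnet reflector errors earlier in the capture: for roughly thirty seconds after the restart the service VIP was unreachable, then caches synced and the replacement storage-provisioner above took over cleanly. Were the timeout to persist, a hedged in-cluster probe would be the first check (the image choice is an assumption):

    kubectl --context no-preload-108149 run viptest --rm -it --restart=Never \
      --image=curlimages/curl -- curl -sk https://10.96.0.1/version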
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-108149 -n no-preload-108149
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-108149 -n no-preload-108149: exit status 2 (389.148484ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-108149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.00s)
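The Pause failure pattern here (the apiserver still reports Running after minikube pause) recurs across this group and shares a symptom visible in the next test's stderr: runc's state directory is missing, so minikube's paused-check cannot enumerate containers. A sketch for confirming this by hand on a profile that is still up (this one was deleted right after the test; crictl ships in the minikube node image):

    out/minikube-linux-arm64 -p no-preload-108149 ssh -- sudo ls /run/runc   # runc state dir present?
    out/minikube-linux-arm64 -p no-preload-108149 ssh -- sudo crictl ps -a   # container states per cri-o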

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-834340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-834340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (264.013757ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-834340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-834340 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-834340 describe deploy/metrics-server -n kube-system: exit status 1 (86.258735ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-834340 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
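The MK_ADDON_ENABLE_PAUSED exit above is a consequence, not the cause: before enabling an addon, minikube checks whether the runtime is paused by running "sudo runc list -f json", and that fails outright because /run/runc does not exist. That points at the node's cri-o/runc wiring rather than at the addon itself. A hedged check of where cri-o expects runc state (runtime_root is the relevant crio.conf key; if unset, runc defaults to /run/runc):

    out/minikube-linux-arm64 -p embed-certs-834340 ssh -- sudo ls /run/runc
    out/minikube-linux-arm64 -p embed-certs-834340 ssh -- sudo grep -rn runtime_root /etc/crio/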
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-834340
helpers_test.go:243: (dbg) docker inspect embed-certs-834340:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59",
	        "Created": "2025-10-19T13:15:37.885260353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 486334,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:15:37.958549617Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/hosts",
	        "LogPath": "/var/lib/docker/containers/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59-json.log",
	        "Name": "/embed-certs-834340",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-834340:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-834340",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59",
	                "LowerDir": "/var/lib/docker/overlay2/fd9e9f7bbe80ae9f84f50f65044e2fc095d54180303dacdaaf2af69ede890f60-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fd9e9f7bbe80ae9f84f50f65044e2fc095d54180303dacdaaf2af69ede890f60/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fd9e9f7bbe80ae9f84f50f65044e2fc095d54180303dacdaaf2af69ede890f60/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fd9e9f7bbe80ae9f84f50f65044e2fc095d54180303dacdaaf2af69ede890f60/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-834340",
	                "Source": "/var/lib/docker/volumes/embed-certs-834340/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-834340",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-834340",
	                "name.minikube.sigs.k8s.io": "embed-certs-834340",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f0b92c5fd2c8cd33292e92bda900c873b77930c46461457a8a8ea13404511733",
	            "SandboxKey": "/var/run/docker/netns/f0b92c5fd2c8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-834340": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:6d:19:4f:e2:5a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4736119f136360f6c549379b3521579c84fb2cab47b61b166d29a201ac636c1c",
	                    "EndpointID": "58518524620d555277cfe2879e2f731000108bd0fe2cfa1b9c885fb5b5c63ff7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-834340",
	                        "9a5cfef083e8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-834340 -n embed-certs-834340
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-834340 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-834340 logs -n 25: (1.473887099s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-264135                                                                                                                                                                                                                        │ cert-options-264135          │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:12 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:12 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p cert-expiration-088393 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-088393       │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ delete  │ -p cert-expiration-088393                                                                                                                                                                                                                     │ cert-expiration-088393       │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:13 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-842494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │                     │
	│ stop    │ -p old-k8s-version-842494 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-842494 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │ 19 Oct 25 13:14 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-108149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ stop    │ -p no-preload-108149 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable dashboard -p no-preload-108149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:16 UTC │
	│ image   │ old-k8s-version-842494 image list --format=json                                                                                                                                                                                               │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ pause   │ -p old-k8s-version-842494 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                                                                                     │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                                                                                     │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:16 UTC │
	│ image   │ no-preload-108149 image list --format=json                                                                                                                                                                                                    │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ pause   │ -p no-preload-108149 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │                     │
	│ delete  │ -p no-preload-108149                                                                                                                                                                                                                          │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p no-preload-108149                                                                                                                                                                                                                          │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p disable-driver-mounts-418719                                                                                                                                                                                                               │ disable-driver-mounts-418719 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-834340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:16:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:16:38.586023  490179 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:16:38.586162  490179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:16:38.586174  490179 out.go:374] Setting ErrFile to fd 2...
	I1019 13:16:38.586192  490179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:16:38.586485  490179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:16:38.586927  490179 out.go:368] Setting JSON to false
	I1019 13:16:38.587927  490179 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10749,"bootTime":1760869050,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:16:38.587997  490179 start.go:141] virtualization:  
	I1019 13:16:38.591969  490179 out.go:179] * [default-k8s-diff-port-455348] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:16:38.596164  490179 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:16:38.596253  490179 notify.go:220] Checking for updates...
	I1019 13:16:38.602371  490179 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:16:38.605454  490179 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:16:38.608759  490179 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:16:38.611684  490179 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:16:38.614647  490179 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:16:38.618269  490179 config.go:182] Loaded profile config "embed-certs-834340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:16:38.618379  490179 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:16:38.640953  490179 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:16:38.641080  490179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:16:38.701726  490179 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:16:38.691031878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:16:38.701839  490179 docker.go:318] overlay module found
	I1019 13:16:38.705026  490179 out.go:179] * Using the docker driver based on user configuration
	I1019 13:16:38.707989  490179 start.go:305] selected driver: docker
	I1019 13:16:38.708011  490179 start.go:925] validating driver "docker" against <nil>
	I1019 13:16:38.708032  490179 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:16:38.708779  490179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:16:38.766085  490179 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:16:38.75689365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:16:38.766241  490179 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 13:16:38.766477  490179 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:16:38.769427  490179 out.go:179] * Using Docker driver with root privileges
	I1019 13:16:38.772321  490179 cni.go:84] Creating CNI manager for ""
	I1019 13:16:38.772393  490179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:16:38.772404  490179 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 13:16:38.772487  490179 start.go:349] cluster config:
	{Name:default-k8s-diff-port-455348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:16:38.775745  490179 out.go:179] * Starting "default-k8s-diff-port-455348" primary control-plane node in "default-k8s-diff-port-455348" cluster
	I1019 13:16:38.778578  490179 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:16:38.781495  490179 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:16:38.784304  490179 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:16:38.784330  490179 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:16:38.784353  490179 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 13:16:38.784362  490179 cache.go:58] Caching tarball of preloaded images
	I1019 13:16:38.784444  490179 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:16:38.784454  490179 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 13:16:38.784560  490179 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/config.json ...
	I1019 13:16:38.784577  490179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/config.json: {Name:mkd779a008b116054ee965cce70a4d38a1715d97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
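The config.json save above goes through lock.go with a 500ms retry delay and a 1m timeout before the write proceeds. A minimal sketch of that acquire-then-write pattern, assuming a plain O_EXCL lockfile (an illustrative stand-in; minikube's actual lock implementation may differ):

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"io/fs"
	"os"
	"time"
)

// writeLocked acquires an exclusive lockfile (retrying every 500ms, giving up
// after 1m, matching the Delay/Timeout values in the log), writes the profile
// config as JSON, then releases the lock.
func writeLocked(path string, v any) error {
	lock := path + ".lock"
	deadline := time.Now().Add(time.Minute)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			f.Close()
			break
		}
		if !errors.Is(err, fs.ErrExist) || time.Now().After(deadline) {
			return fmt.Errorf("acquiring %s: %w", lock, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	defer os.Remove(lock)
	data, err := json.MarshalIndent(v, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	cfg := map[string]any{"Name": "default-k8s-diff-port-455348", "APIServerPort": 8444}
	if err := writeLocked("config.json", cfg); err != nil {
		fmt.Println(err)
	}
}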
	I1019 13:16:38.804402  490179 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:16:38.804423  490179 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:16:38.804436  490179 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:16:38.804458  490179 start.go:360] acquireMachinesLock for default-k8s-diff-port-455348: {Name:mk240c57fae30746abb498299da3308a8a0334da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:16:38.804560  490179 start.go:364] duration metric: took 86.639µs to acquireMachinesLock for "default-k8s-diff-port-455348"
	I1019 13:16:38.804586  490179 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-455348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:16:38.804656  490179 start.go:125] createHost starting for "" (driver="docker")
	W1019 13:16:38.152491  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	W1019 13:16:40.652068  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	I1019 13:16:38.808116  490179 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 13:16:38.808347  490179 start.go:159] libmachine.API.Create for "default-k8s-diff-port-455348" (driver="docker")
	I1019 13:16:38.808396  490179 client.go:168] LocalClient.Create starting
	I1019 13:16:38.808465  490179 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem
	I1019 13:16:38.808506  490179 main.go:141] libmachine: Decoding PEM data...
	I1019 13:16:38.808527  490179 main.go:141] libmachine: Parsing certificate...
	I1019 13:16:38.808585  490179 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem
	I1019 13:16:38.808608  490179 main.go:141] libmachine: Decoding PEM data...
	I1019 13:16:38.808618  490179 main.go:141] libmachine: Parsing certificate...
	I1019 13:16:38.808984  490179 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-455348 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 13:16:38.825437  490179 cli_runner.go:211] docker network inspect default-k8s-diff-port-455348 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 13:16:38.825579  490179 network_create.go:284] running [docker network inspect default-k8s-diff-port-455348] to gather additional debugging logs...
	I1019 13:16:38.825608  490179 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-455348
	W1019 13:16:38.841382  490179 cli_runner.go:211] docker network inspect default-k8s-diff-port-455348 returned with exit code 1
	I1019 13:16:38.841413  490179 network_create.go:287] error running [docker network inspect default-k8s-diff-port-455348]: docker network inspect default-k8s-diff-port-455348: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-455348 not found
	I1019 13:16:38.841426  490179 network_create.go:289] output of [docker network inspect default-k8s-diff-port-455348]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-455348 not found
	
	** /stderr **
	I1019 13:16:38.841527  490179 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:16:38.857584  490179 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-319c97358c5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2a:99:c3:44:12:51} reservation:<nil>}
	I1019 13:16:38.857974  490179 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5c09b33e0936 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:93:4b:f6:fd:1c} reservation:<nil>}
	I1019 13:16:38.858347  490179 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c2bbaadd4a8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:8f:96:27:48:2c} reservation:<nil>}
	I1019 13:16:38.858818  490179 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a1e60}
	I1019 13:16:38.858841  490179 network_create.go:124] attempt to create docker network default-k8s-diff-port-455348 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1019 13:16:38.858902  490179 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-455348 default-k8s-diff-port-455348
	I1019 13:16:38.916260  490179 network_create.go:108] docker network default-k8s-diff-port-455348 192.168.76.0/24 created
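The network.go lines above walk candidate /24 subnets, skip 192.168.49.0/24, .58 and .67 (taken by existing bridges), and settle on 192.168.76.0/24; the third octet steps by 9. A sketch of that selection with the taken set hard-coded from this log (the +9 ladder is inferred from these lines, not confirmed against minikube's source):

package main

import "fmt"

// freeSubnet walks the candidate ladder visible in the log (192.168.49.0/24,
// then +9 on the third octet per step) and returns the first subnet not
// already taken. The taken set is hard-coded here for illustration; minikube
// builds it by inspecting the host's existing docker bridge networks.
func freeSubnet(taken map[string]bool) string {
	for third := 49; third <= 247; third += 9 {
		cand := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cand] {
			return cand
		}
	}
	return "" // no free candidate in this range
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // br-319c97358c5c
		"192.168.58.0/24": true, // br-5c09b33e0936
		"192.168.67.0/24": true, // br-2c2bbaadd4a8
	}
	fmt.Println(freeSubnet(taken)) // 192.168.76.0/24
}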
	I1019 13:16:38.916300  490179 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-455348" container
	I1019 13:16:38.916395  490179 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 13:16:38.933102  490179 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-455348 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-455348 --label created_by.minikube.sigs.k8s.io=true
	I1019 13:16:38.951212  490179 oci.go:103] Successfully created a docker volume default-k8s-diff-port-455348
	I1019 13:16:38.951292  490179 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-455348-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-455348 --entrypoint /usr/bin/test -v default-k8s-diff-port-455348:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 13:16:39.495574  490179 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-455348
	I1019 13:16:39.495631  490179 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:16:39.495651  490179 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 13:16:39.495721  490179 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-455348:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 13:16:42.652421  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	W1019 13:16:44.652613  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	I1019 13:16:43.954645  490179 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-455348:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.458880357s)
	I1019 13:16:43.954676  490179 kic.go:203] duration metric: took 4.459021479s to extract preloaded images to volume ...
	W1019 13:16:43.954814  490179 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 13:16:43.954928  490179 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 13:16:44.019529  490179 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-455348 --name default-k8s-diff-port-455348 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-455348 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-455348 --network default-k8s-diff-port-455348 --ip 192.168.76.2 --volume default-k8s-diff-port-455348:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 13:16:44.330514  490179 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Running}}
	I1019 13:16:44.357023  490179 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:16:44.384843  490179 cli_runner.go:164] Run: docker exec default-k8s-diff-port-455348 stat /var/lib/dpkg/alternatives/iptables
	I1019 13:16:44.439083  490179 oci.go:144] the created container "default-k8s-diff-port-455348" has a running status.
	I1019 13:16:44.439197  490179 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa...
	I1019 13:16:44.476996  490179 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 13:16:44.498175  490179 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:16:44.524131  490179 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 13:16:44.524154  490179 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-455348 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 13:16:44.576361  490179 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:16:44.594750  490179 machine.go:93] provisionDockerMachine start ...
	I1019 13:16:44.594842  490179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:16:44.621135  490179 main.go:141] libmachine: Using SSH client type: native
	I1019 13:16:44.621456  490179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1019 13:16:44.621472  490179 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:16:44.622196  490179 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60082->127.0.0.1:33443: read: connection reset by peer
	I1019 13:16:47.777335  490179 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-455348
	
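Note the sequence above: the first SSH dial at 13:16:44 is reset by the peer while the container's sshd is still starting, and the retry at 13:16:47 succeeds. A minimal Go sketch of that dial-with-retry pattern (the address, attempt count, and backoff are illustrative, not minikube's actual values):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps attempting a TCP connection to the container's
// published SSH port, sleeping between attempts, until the handshake
// succeeds or the attempt budget runs out.
func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("dial %s failed after %d attempts: %w", addr, attempts, lastErr)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:33443", 10, 500*time.Millisecond)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	fmt.Println("connected:", conn.RemoteAddr())
}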
	I1019 13:16:47.777357  490179 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-455348"
	I1019 13:16:47.777435  490179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:16:47.796334  490179 main.go:141] libmachine: Using SSH client type: native
	I1019 13:16:47.796653  490179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1019 13:16:47.796665  490179 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-455348 && echo "default-k8s-diff-port-455348" | sudo tee /etc/hostname
	I1019 13:16:47.955227  490179 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-455348
	
	I1019 13:16:47.955340  490179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:16:47.973276  490179 main.go:141] libmachine: Using SSH client type: native
	I1019 13:16:47.973580  490179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1019 13:16:47.973603  490179 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-455348' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-455348/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-455348' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:16:48.134321  490179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 13:16:48.134350  490179 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:16:48.134378  490179 ubuntu.go:190] setting up certificates
	I1019 13:16:48.134387  490179 provision.go:84] configureAuth start
	I1019 13:16:48.134450  490179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-455348
	I1019 13:16:48.156100  490179 provision.go:143] copyHostCerts
	I1019 13:16:48.156188  490179 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:16:48.156204  490179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:16:48.156293  490179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:16:48.156417  490179 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:16:48.156428  490179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:16:48.156464  490179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:16:48.156544  490179 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:16:48.156554  490179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:16:48.156589  490179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:16:48.156662  490179 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-455348 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-455348 localhost minikube]
	W1019 13:16:47.152296  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	W1019 13:16:49.653416  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	I1019 13:16:48.716683  490179 provision.go:177] copyRemoteCerts
	I1019 13:16:48.716757  490179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:16:48.716818  490179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:16:48.734238  490179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:16:48.837526  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:16:48.855378  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 13:16:48.873121  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 13:16:48.890747  490179 provision.go:87] duration metric: took 756.335198ms to configureAuth
	I1019 13:16:48.890774  490179 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:16:48.891002  490179 config.go:182] Loaded profile config "default-k8s-diff-port-455348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:16:48.891156  490179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:16:48.908375  490179 main.go:141] libmachine: Using SSH client type: native
	I1019 13:16:48.908689  490179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1019 13:16:48.908709  490179 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:16:49.220162  490179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:16:49.220184  490179 machine.go:96] duration metric: took 4.625409789s to provisionDockerMachine
	I1019 13:16:49.220195  490179 client.go:171] duration metric: took 10.411787353s to LocalClient.Create
	I1019 13:16:49.220209  490179 start.go:167] duration metric: took 10.4118633s to libmachine.API.Create "default-k8s-diff-port-455348"
	I1019 13:16:49.220257  490179 start.go:293] postStartSetup for "default-k8s-diff-port-455348" (driver="docker")
	I1019 13:16:49.220277  490179 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:16:49.220354  490179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:16:49.220434  490179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:16:49.238958  490179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:16:49.341515  490179 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:16:49.344802  490179 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:16:49.344831  490179 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:16:49.344842  490179 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:16:49.344895  490179 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:16:49.344975  490179 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:16:49.345076  490179 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:16:49.352248  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:16:49.369523  490179 start.go:296] duration metric: took 149.242083ms for postStartSetup
	I1019 13:16:49.369977  490179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-455348
	I1019 13:16:49.386803  490179 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/config.json ...
	I1019 13:16:49.387080  490179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:16:49.387128  490179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:16:49.403393  490179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:16:49.502879  490179 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:16:49.507479  490179 start.go:128] duration metric: took 10.702808904s to createHost
	I1019 13:16:49.507502  490179 start.go:83] releasing machines lock for "default-k8s-diff-port-455348", held for 10.702933828s
	I1019 13:16:49.507618  490179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-455348
	I1019 13:16:49.524507  490179 ssh_runner.go:195] Run: cat /version.json
	I1019 13:16:49.524537  490179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:16:49.524564  490179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:16:49.524595  490179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:16:49.546375  490179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:16:49.551399  490179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:16:49.651034  490179 ssh_runner.go:195] Run: systemctl --version
	I1019 13:16:49.744556  490179 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:16:49.786107  490179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:16:49.790607  490179 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:16:49.790713  490179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:16:49.820966  490179 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 13:16:49.821039  490179 start.go:495] detecting cgroup driver to use...
	I1019 13:16:49.821094  490179 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:16:49.821184  490179 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:16:49.839164  490179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:16:49.852229  490179 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:16:49.852291  490179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:16:49.868961  490179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:16:49.888260  490179 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:16:50.014812  490179 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:16:50.144176  490179 docker.go:234] disabling docker service ...
	I1019 13:16:50.144243  490179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:16:50.169146  490179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:16:50.183829  490179 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:16:50.313386  490179 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:16:50.440793  490179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:16:50.454647  490179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:16:50.470262  490179 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 13:16:50.470343  490179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:16:50.480197  490179 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:16:50.480341  490179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:16:50.490252  490179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:16:50.500021  490179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:16:50.509491  490179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:16:50.519069  490179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:16:50.532111  490179 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:16:50.545839  490179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:16:50.555344  490179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:16:50.563130  490179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:16:50.570840  490179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:16:50.693553  490179 ssh_runner.go:195] Run: sudo systemctl restart crio
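The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager to cgroupfs, and adjust conmon_cgroup and default_sysctls, before the daemon-reload and crio restart. The same key = value rewrite, sketched in Go on an in-memory string (minikube itself shells sed out over SSH; this helper is illustrative):

package main

import (
	"fmt"
	"regexp"
)

// setCrioKey mirrors the sed edits in the log: replace any existing
// `key = ...` line in the crio drop-in with the new quoted value.
func setCrioKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = setCrioKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setCrioKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}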
	I1019 13:16:50.821522  490179 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:16:50.821643  490179 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:16:50.825867  490179 start.go:563] Will wait 60s for crictl version
	I1019 13:16:50.825968  490179 ssh_runner.go:195] Run: which crictl
	I1019 13:16:50.831105  490179 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:16:50.860238  490179 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 13:16:50.860383  490179 ssh_runner.go:195] Run: crio --version
	I1019 13:16:50.890529  490179 ssh_runner.go:195] Run: crio --version
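The crictl version output captured above is flat `Key:  value` text. A small illustrative parser for it (a hypothetical helper, not minikube code):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion splits each `Key:  value` line of `crictl version`
// output into a map; field names match the output shown in the log.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.34.1\nRuntimeApiVersion:  v1\n"
	f := parseCrictlVersion(out)
	fmt.Println(f["RuntimeName"], f["RuntimeVersion"]) // cri-o 1.34.1
}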
	I1019 13:16:50.926695  490179 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 13:16:50.929535  490179 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-455348 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:16:50.945836  490179 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 13:16:50.949761  490179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:16:50.960498  490179 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-455348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 13:16:50.960627  490179 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:16:50.960686  490179 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:16:50.999191  490179 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:16:50.999216  490179 crio.go:433] Images already preloaded, skipping extraction
	I1019 13:16:50.999279  490179 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:16:51.034861  490179 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:16:51.034883  490179 cache_images.go:85] Images are preloaded, skipping loading
	I1019 13:16:51.034891  490179 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1019 13:16:51.034983  490179 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-455348 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 13:16:51.035067  490179 ssh_runner.go:195] Run: crio config
	I1019 13:16:51.100903  490179 cni.go:84] Creating CNI manager for ""
	I1019 13:16:51.100976  490179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:16:51.101012  490179 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 13:16:51.101068  490179 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-455348 NodeName:default-k8s-diff-port-455348 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:16:51.101346  490179 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-455348"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
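Worth noting in the generated config above: the KubeletConfiguration pins cgroupDriver to cgroupfs and the CRI endpoint to the crio socket, and effectively disables swap checking and disk-pressure eviction (failSwapOn: false, imageGCHighThresholdPercent: 100, all evictionHard thresholds at 0%). A minimal read-back sketch, assuming gopkg.in/yaml.v3 is available and covering only the fields inspected:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig models only the KubeletConfiguration fields we inspect here;
// it is an illustrative subset, not the full upstream type.
type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
}

func main() {
	doc := []byte("cgroupDriver: cgroupfs\ncontainerRuntimeEndpoint: unix:///var/run/crio/crio.sock\nfailSwapOn: false\n")
	var kc kubeletConfig
	if err := yaml.Unmarshal(doc, &kc); err != nil {
		panic(err)
	}
	fmt.Println(kc.CgroupDriver, kc.FailSwapOn) // cgroupfs false
}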
	I1019 13:16:51.101481  490179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 13:16:51.110274  490179 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 13:16:51.110356  490179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 13:16:51.119059  490179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1019 13:16:51.133316  490179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:16:51.148800  490179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1019 13:16:51.166102  490179 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 13:16:51.174480  490179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:16:51.185744  490179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:16:51.298290  490179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:16:51.335459  490179 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348 for IP: 192.168.76.2
	I1019 13:16:51.335481  490179 certs.go:195] generating shared ca certs ...
	I1019 13:16:51.335497  490179 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:16:51.335641  490179 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 13:16:51.335697  490179 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 13:16:51.335708  490179 certs.go:257] generating profile certs ...
	I1019 13:16:51.335768  490179 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.key
	I1019 13:16:51.335792  490179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt with IP's: []
	I1019 13:16:52.566642  490179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt ...
	I1019 13:16:52.566680  490179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: {Name:mk8d33bf20eecc401009306477de20f274361d42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:16:52.566901  490179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.key ...
	I1019 13:16:52.566916  490179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.key: {Name:mkb873a72c7dd0cae753b252442331d54bff65a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:16:52.567022  490179 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.key.223e319e
	I1019 13:16:52.567044  490179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.crt.223e319e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1019 13:16:52.940002  490179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.crt.223e319e ...
	I1019 13:16:52.940037  490179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.crt.223e319e: {Name:mk4b40cc58af7e4d0f06c291289c9229a6b5dbdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:16:52.940264  490179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.key.223e319e ...
	I1019 13:16:52.940280  490179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.key.223e319e: {Name:mkb9f6d7120ec0b95711bce4f3ddaca833787cb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:16:52.940381  490179 certs.go:382] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.crt.223e319e -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.crt
	I1019 13:16:52.940470  490179 certs.go:386] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.key.223e319e -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.key
	I1019 13:16:52.940532  490179 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.key
	I1019 13:16:52.940552  490179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.crt with IP's: []
	I1019 13:16:53.873412  490179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.crt ...
	I1019 13:16:53.873485  490179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.crt: {Name:mk03ff59e99dfd0de7e6aba0e67d252f177fdcfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:16:53.873833  490179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.key ...
	I1019 13:16:53.873873  490179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.key: {Name:mkd7f0497bfbf085a2b495e8dd3b33c1555fcf7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
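	The apiserver cert generated above is signed for the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]; a hedged way to confirm the SANs written to disk (profile path as in the log):
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'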
	I1019 13:16:53.874127  490179 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem (1338 bytes)
	W1019 13:16:53.874198  490179 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518_empty.pem, impossibly tiny 0 bytes
	I1019 13:16:53.874224  490179 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 13:16:53.874287  490179 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 13:16:53.874346  490179 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 13:16:53.874394  490179 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 13:16:53.874474  490179 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:16:53.875122  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 13:16:53.901354  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 13:16:53.929688  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 13:16:53.962304  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 13:16:53.985278  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1019 13:16:54.018602  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 13:16:54.046233  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 13:16:54.067350  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 13:16:54.088571  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 13:16:54.108078  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem --> /usr/share/ca-certificates/294518.pem (1338 bytes)
	I1019 13:16:54.125823  490179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /usr/share/ca-certificates/2945182.pem (1708 bytes)
	I1019 13:16:54.144955  490179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 13:16:54.158191  490179 ssh_runner.go:195] Run: openssl version
	I1019 13:16:54.164518  490179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2945182.pem && ln -fs /usr/share/ca-certificates/2945182.pem /etc/ssl/certs/2945182.pem"
	I1019 13:16:54.173540  490179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2945182.pem
	I1019 13:16:54.178122  490179 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:20 /usr/share/ca-certificates/2945182.pem
	I1019 13:16:54.178187  490179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2945182.pem
	I1019 13:16:54.220428  490179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2945182.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 13:16:54.228747  490179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 13:16:54.238036  490179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:16:54.242025  490179 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:16:54.242091  490179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:16:54.285620  490179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 13:16:54.294196  490179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294518.pem && ln -fs /usr/share/ca-certificates/294518.pem /etc/ssl/certs/294518.pem"
	I1019 13:16:54.302382  490179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294518.pem
	I1019 13:16:54.306611  490179 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:20 /usr/share/ca-certificates/294518.pem
	I1019 13:16:54.306674  490179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294518.pem
	I1019 13:16:54.349236  490179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294518.pem /etc/ssl/certs/51391683.0"
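	The `<hash>.0` symlink names above come from OpenSSL's subject-hash convention, which the `openssl x509 -hash` calls compute; e.g. for the minikube CA:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # symlink to minikubeCA.pem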
	I1019 13:16:54.357901  490179 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 13:16:54.361996  490179 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 13:16:54.362100  490179 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-455348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:16:54.362185  490179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 13:16:54.362243  490179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 13:16:54.390170  490179 cri.go:89] found id: ""
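	The empty `found id: ""` above means no kube-system containers exist yet on this fresh node; the same query can be reproduced by hand against the CRI-O socket configured earlier:
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock \
	    ps -a --quiet --label io.kubernetes.pod.namespace=kube-system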
	I1019 13:16:54.390279  490179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 13:16:54.398638  490179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 13:16:54.406486  490179 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 13:16:54.406547  490179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 13:16:54.415188  490179 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 13:16:54.415207  490179 kubeadm.go:157] found existing configuration files:
	
	I1019 13:16:54.415260  490179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1019 13:16:54.423101  490179 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 13:16:54.423198  490179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 13:16:54.430870  490179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1019 13:16:54.438717  490179 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 13:16:54.438804  490179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 13:16:54.446544  490179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1019 13:16:54.454423  490179 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 13:16:54.454490  490179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 13:16:54.461952  490179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1019 13:16:54.469667  490179 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 13:16:54.469822  490179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 13:16:54.477900  490179 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 13:16:54.529352  490179 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 13:16:54.529561  490179 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 13:16:54.566526  490179 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 13:16:54.566637  490179 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 13:16:54.566706  490179 kubeadm.go:318] OS: Linux
	I1019 13:16:54.566781  490179 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 13:16:54.566837  490179 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1019 13:16:54.566892  490179 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 13:16:54.566947  490179 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 13:16:54.567001  490179 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 13:16:54.567053  490179 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 13:16:54.567104  490179 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 13:16:54.567159  490179 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 13:16:54.567218  490179 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1019 13:16:54.646859  490179 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 13:16:54.646977  490179 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 13:16:54.647177  490179 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 13:16:54.654944  490179 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
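	The pre-pull hint in the preflight output above corresponds to this command, run against the same config and binary path this log uses (illustrative):
	  sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	    kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml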
	W1019 13:16:52.152855  485611 node_ready.go:57] node "embed-certs-834340" has "Ready":"False" status (will retry)
	I1019 13:16:53.652429  485611 node_ready.go:49] node "embed-certs-834340" is "Ready"
	I1019 13:16:53.652458  485611 node_ready.go:38] duration metric: took 40.003768071s for node "embed-certs-834340" to be "Ready" ...
	I1019 13:16:53.652473  485611 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:16:53.652531  485611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:16:53.707311  485611 api_server.go:72] duration metric: took 41.211794989s to wait for apiserver process to appear ...
	I1019 13:16:53.707334  485611 api_server.go:88] waiting for apiserver healthz status ...
	I1019 13:16:53.707352  485611 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 13:16:53.720121  485611 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 13:16:53.721566  485611 api_server.go:141] control plane version: v1.34.1
	I1019 13:16:53.721589  485611 api_server.go:131] duration metric: took 14.248816ms to wait for apiserver health ...
	I1019 13:16:53.721599  485611 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 13:16:53.726207  485611 system_pods.go:59] 8 kube-system pods found
	I1019 13:16:53.726305  485611 system_pods.go:61] "coredns-66bc5c9577-sgj8p" [ba81b6cb-a1c6-4d8f-9fd8-33c80b505be0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:16:53.726327  485611 system_pods.go:61] "etcd-embed-certs-834340" [53d9ce2f-a823-4b6d-b629-d672a8986024] Running
	I1019 13:16:53.726370  485611 system_pods.go:61] "kindnet-cbzm8" [9919aa81-5732-4d82-834f-eecd379ff767] Running
	I1019 13:16:53.726396  485611 system_pods.go:61] "kube-apiserver-embed-certs-834340" [ad0eb2ab-9de5-4a93-bb62-974384a4312f] Running
	I1019 13:16:53.726420  485611 system_pods.go:61] "kube-controller-manager-embed-certs-834340" [8a517576-4b00-4537-a6fa-192a9d0839ad] Running
	I1019 13:16:53.726456  485611 system_pods.go:61] "kube-proxy-2skj7" [7f512885-261d-45a8-9870-c7f00e96dc43] Running
	I1019 13:16:53.726463  485611 system_pods.go:61] "kube-scheduler-embed-certs-834340" [1b45002b-6861-45b0-928d-da16cc52d739] Running
	I1019 13:16:53.726471  485611 system_pods.go:61] "storage-provisioner" [02bc630b-7545-484e-97c0-2918b40a150e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:16:53.726478  485611 system_pods.go:74] duration metric: took 4.873485ms to wait for pod list to return data ...
	I1019 13:16:53.726487  485611 default_sa.go:34] waiting for default service account to be created ...
	I1019 13:16:53.729879  485611 default_sa.go:45] found service account: "default"
	I1019 13:16:53.729901  485611 default_sa.go:55] duration metric: took 3.408005ms for default service account to be created ...
	I1019 13:16:53.729911  485611 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 13:16:53.733732  485611 system_pods.go:86] 8 kube-system pods found
	I1019 13:16:53.733763  485611 system_pods.go:89] "coredns-66bc5c9577-sgj8p" [ba81b6cb-a1c6-4d8f-9fd8-33c80b505be0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:16:53.733771  485611 system_pods.go:89] "etcd-embed-certs-834340" [53d9ce2f-a823-4b6d-b629-d672a8986024] Running
	I1019 13:16:53.733777  485611 system_pods.go:89] "kindnet-cbzm8" [9919aa81-5732-4d82-834f-eecd379ff767] Running
	I1019 13:16:53.733782  485611 system_pods.go:89] "kube-apiserver-embed-certs-834340" [ad0eb2ab-9de5-4a93-bb62-974384a4312f] Running
	I1019 13:16:53.733787  485611 system_pods.go:89] "kube-controller-manager-embed-certs-834340" [8a517576-4b00-4537-a6fa-192a9d0839ad] Running
	I1019 13:16:53.733790  485611 system_pods.go:89] "kube-proxy-2skj7" [7f512885-261d-45a8-9870-c7f00e96dc43] Running
	I1019 13:16:53.733794  485611 system_pods.go:89] "kube-scheduler-embed-certs-834340" [1b45002b-6861-45b0-928d-da16cc52d739] Running
	I1019 13:16:53.733800  485611 system_pods.go:89] "storage-provisioner" [02bc630b-7545-484e-97c0-2918b40a150e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:16:53.733831  485611 retry.go:31] will retry after 246.630954ms: missing components: kube-dns
	I1019 13:16:53.987497  485611 system_pods.go:86] 8 kube-system pods found
	I1019 13:16:53.987527  485611 system_pods.go:89] "coredns-66bc5c9577-sgj8p" [ba81b6cb-a1c6-4d8f-9fd8-33c80b505be0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:16:53.987535  485611 system_pods.go:89] "etcd-embed-certs-834340" [53d9ce2f-a823-4b6d-b629-d672a8986024] Running
	I1019 13:16:53.987540  485611 system_pods.go:89] "kindnet-cbzm8" [9919aa81-5732-4d82-834f-eecd379ff767] Running
	I1019 13:16:53.987545  485611 system_pods.go:89] "kube-apiserver-embed-certs-834340" [ad0eb2ab-9de5-4a93-bb62-974384a4312f] Running
	I1019 13:16:53.987551  485611 system_pods.go:89] "kube-controller-manager-embed-certs-834340" [8a517576-4b00-4537-a6fa-192a9d0839ad] Running
	I1019 13:16:53.987555  485611 system_pods.go:89] "kube-proxy-2skj7" [7f512885-261d-45a8-9870-c7f00e96dc43] Running
	I1019 13:16:53.987559  485611 system_pods.go:89] "kube-scheduler-embed-certs-834340" [1b45002b-6861-45b0-928d-da16cc52d739] Running
	I1019 13:16:53.987563  485611 system_pods.go:89] "storage-provisioner" [02bc630b-7545-484e-97c0-2918b40a150e] Running
	I1019 13:16:53.987570  485611 system_pods.go:126] duration metric: took 257.653503ms to wait for k8s-apps to be running ...
	I1019 13:16:53.987578  485611 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 13:16:53.987627  485611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:16:54.005395  485611 system_svc.go:56] duration metric: took 17.807592ms WaitForService to wait for kubelet
	I1019 13:16:54.005431  485611 kubeadm.go:586] duration metric: took 41.509919013s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:16:54.005452  485611 node_conditions.go:102] verifying NodePressure condition ...
	I1019 13:16:54.010917  485611 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 13:16:54.011002  485611 node_conditions.go:123] node cpu capacity is 2
	I1019 13:16:54.011031  485611 node_conditions.go:105] duration metric: took 5.573073ms to run NodePressure ...
	I1019 13:16:54.011075  485611 start.go:241] waiting for startup goroutines ...
	I1019 13:16:54.011101  485611 start.go:246] waiting for cluster config update ...
	I1019 13:16:54.011127  485611 start.go:255] writing updated cluster config ...
	I1019 13:16:54.011492  485611 ssh_runner.go:195] Run: rm -f paused
	I1019 13:16:54.016138  485611 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:16:54.021037  485611 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sgj8p" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:55.030726  485611 pod_ready.go:94] pod "coredns-66bc5c9577-sgj8p" is "Ready"
	I1019 13:16:55.030752  485611 pod_ready.go:86] duration metric: took 1.009680233s for pod "coredns-66bc5c9577-sgj8p" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:55.034267  485611 pod_ready.go:83] waiting for pod "etcd-embed-certs-834340" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:55.040870  485611 pod_ready.go:94] pod "etcd-embed-certs-834340" is "Ready"
	I1019 13:16:55.040896  485611 pod_ready.go:86] duration metric: took 6.60379ms for pod "etcd-embed-certs-834340" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:55.044125  485611 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-834340" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:55.050602  485611 pod_ready.go:94] pod "kube-apiserver-embed-certs-834340" is "Ready"
	I1019 13:16:55.050628  485611 pod_ready.go:86] duration metric: took 6.477995ms for pod "kube-apiserver-embed-certs-834340" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:55.053766  485611 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-834340" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:55.226502  485611 pod_ready.go:94] pod "kube-controller-manager-embed-certs-834340" is "Ready"
	I1019 13:16:55.226586  485611 pod_ready.go:86] duration metric: took 172.744393ms for pod "kube-controller-manager-embed-certs-834340" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:55.433288  485611 pod_ready.go:83] waiting for pod "kube-proxy-2skj7" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:55.826610  485611 pod_ready.go:94] pod "kube-proxy-2skj7" is "Ready"
	I1019 13:16:55.826687  485611 pod_ready.go:86] duration metric: took 393.311265ms for pod "kube-proxy-2skj7" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:56.027124  485611 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-834340" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:56.426778  485611 pod_ready.go:94] pod "kube-scheduler-embed-certs-834340" is "Ready"
	I1019 13:16:56.426816  485611 pod_ready.go:86] duration metric: took 399.662407ms for pod "kube-scheduler-embed-certs-834340" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:16:56.426830  485611 pod_ready.go:40] duration metric: took 2.410659356s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
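	The per-label readiness waits above map onto `kubectl wait`; a hedged equivalent for two of the listed labels:
	  kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s
	  kubectl -n kube-system wait pod -l component=kube-scheduler --for=condition=Ready --timeout=4m0s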
	I1019 13:16:56.503389  485611 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 13:16:56.506756  485611 out.go:179] * Done! kubectl is now configured to use "embed-certs-834340" cluster and "default" namespace by default
	I1019 13:16:54.660660  490179 out.go:252]   - Generating certificates and keys ...
	I1019 13:16:54.660771  490179 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 13:16:54.660852  490179 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1019 13:16:54.863301  490179 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 13:16:55.255367  490179 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 13:16:55.398037  490179 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 13:16:55.942199  490179 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 13:16:56.510698  490179 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 13:16:56.511328  490179 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-455348 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 13:16:56.639473  490179 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 13:16:56.640192  490179 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-455348 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 13:16:56.826247  490179 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 13:16:57.564511  490179 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 13:16:58.077238  490179 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 13:16:58.077547  490179 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 13:16:58.737366  490179 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 13:16:59.321880  490179 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 13:16:59.662630  490179 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 13:17:00.179434  490179 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 13:17:00.357199  490179 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 13:17:00.358534  490179 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 13:17:00.362270  490179 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 13:17:00.368923  490179 out.go:252]   - Booting up control plane ...
	I1019 13:17:00.369096  490179 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 13:17:00.369198  490179 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 13:17:00.370788  490179 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 13:17:00.423259  490179 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 13:17:00.423460  490179 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 13:17:00.430789  490179 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 13:17:00.431280  490179 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 13:17:00.431385  490179 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 13:17:00.592348  490179 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 13:17:00.592563  490179 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 13:17:01.593206  490179 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001332583s
	I1019 13:17:01.598536  490179 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 13:17:01.598698  490179 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1019 13:17:01.598847  490179 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 13:17:01.598982  490179 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
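	The three endpoints checked above can be probed directly from inside the node (self-signed certs, hence -k):
	  curl -k https://192.168.76.2:8444/livez
	  curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
	  curl -k https://127.0.0.1:10259/livez     # kube-scheduler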
	
	
	==> CRI-O <==
	Oct 19 13:16:53 embed-certs-834340 crio[838]: time="2025-10-19T13:16:53.766395572Z" level=info msg="Created container ee6f8abd42ecf6c35851aa8508a267efb7028ff4a41de7113e64c8a46c18d282: kube-system/coredns-66bc5c9577-sgj8p/coredns" id=be3a193e-51c4-4da4-b55e-d4b9ec3c00d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:16:53 embed-certs-834340 crio[838]: time="2025-10-19T13:16:53.76967473Z" level=info msg="Starting container: ee6f8abd42ecf6c35851aa8508a267efb7028ff4a41de7113e64c8a46c18d282" id=9ce970ee-625e-41ec-a130-e3ac50f61ed8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:16:53 embed-certs-834340 crio[838]: time="2025-10-19T13:16:53.775680744Z" level=info msg="Started container" PID=1724 containerID=ee6f8abd42ecf6c35851aa8508a267efb7028ff4a41de7113e64c8a46c18d282 description=kube-system/coredns-66bc5c9577-sgj8p/coredns id=9ce970ee-625e-41ec-a130-e3ac50f61ed8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a18451941d5e5be308b61567a6dc777393e1950f9bab82986df9cdbe1dd8b787
	Oct 19 13:16:57 embed-certs-834340 crio[838]: time="2025-10-19T13:16:57.102522521Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9349f672-8ff8-499a-8124-04d3eec5904a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:16:57 embed-certs-834340 crio[838]: time="2025-10-19T13:16:57.10260381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:16:57 embed-certs-834340 crio[838]: time="2025-10-19T13:16:57.110465972Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:86f9b11e88f307da21b798bee83f8bfb592eeae9bd228d8fbd9d4d334bc198ec UID:6ad38544-fd49-4f9c-8c24-24f230946955 NetNS:/var/run/netns/f8a57a86-8791-44e5-8402-a6f88ed04636 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000bc6eb0}] Aliases:map[]}"
	Oct 19 13:16:57 embed-certs-834340 crio[838]: time="2025-10-19T13:16:57.110505217Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 13:16:57 embed-certs-834340 crio[838]: time="2025-10-19T13:16:57.129131972Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:86f9b11e88f307da21b798bee83f8bfb592eeae9bd228d8fbd9d4d334bc198ec UID:6ad38544-fd49-4f9c-8c24-24f230946955 NetNS:/var/run/netns/f8a57a86-8791-44e5-8402-a6f88ed04636 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000bc6eb0}] Aliases:map[]}"
	Oct 19 13:16:57 embed-certs-834340 crio[838]: time="2025-10-19T13:16:57.138015456Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 13:16:57 embed-certs-834340 crio[838]: time="2025-10-19T13:16:57.141209616Z" level=info msg="Ran pod sandbox 86f9b11e88f307da21b798bee83f8bfb592eeae9bd228d8fbd9d4d334bc198ec with infra container: default/busybox/POD" id=9349f672-8ff8-499a-8124-04d3eec5904a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:16:57 embed-certs-834340 crio[838]: time="2025-10-19T13:16:57.143265341Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b71a1f42-cde8-4989-bd48-77a6e23a28ab name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:16:57 embed-certs-834340 crio[838]: time="2025-10-19T13:16:57.143581923Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b71a1f42-cde8-4989-bd48-77a6e23a28ab name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:16:57 embed-certs-834340 crio[838]: time="2025-10-19T13:16:57.143747784Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b71a1f42-cde8-4989-bd48-77a6e23a28ab name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:16:57 embed-certs-834340 crio[838]: time="2025-10-19T13:16:57.147518083Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6f3db7c4-c9bc-4a64-92eb-57868f47ceef name=/runtime.v1.ImageService/PullImage
	Oct 19 13:16:57 embed-certs-834340 crio[838]: time="2025-10-19T13:16:57.152928963Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 13:16:59 embed-certs-834340 crio[838]: time="2025-10-19T13:16:59.415829594Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=6f3db7c4-c9bc-4a64-92eb-57868f47ceef name=/runtime.v1.ImageService/PullImage
	Oct 19 13:16:59 embed-certs-834340 crio[838]: time="2025-10-19T13:16:59.417048843Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=24da6c8a-ba5f-4e10-99f1-0e7b1bdf02d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:16:59 embed-certs-834340 crio[838]: time="2025-10-19T13:16:59.420972482Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b8b51b40-5868-4f25-87d7-19d383ba0bb5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:16:59 embed-certs-834340 crio[838]: time="2025-10-19T13:16:59.428237155Z" level=info msg="Creating container: default/busybox/busybox" id=780f6020-10df-4f1a-ba33-1ab354da4bda name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:16:59 embed-certs-834340 crio[838]: time="2025-10-19T13:16:59.429437023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:16:59 embed-certs-834340 crio[838]: time="2025-10-19T13:16:59.438123639Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:16:59 embed-certs-834340 crio[838]: time="2025-10-19T13:16:59.439032443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:16:59 embed-certs-834340 crio[838]: time="2025-10-19T13:16:59.462695656Z" level=info msg="Created container 98abfeebb769138be7b44910a37b0464353cb5c32ad735d9d523943dfd013a7c: default/busybox/busybox" id=780f6020-10df-4f1a-ba33-1ab354da4bda name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:16:59 embed-certs-834340 crio[838]: time="2025-10-19T13:16:59.466416035Z" level=info msg="Starting container: 98abfeebb769138be7b44910a37b0464353cb5c32ad735d9d523943dfd013a7c" id=b7f96a33-789e-4179-b831-6afb1de05369 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:16:59 embed-certs-834340 crio[838]: time="2025-10-19T13:16:59.471530508Z" level=info msg="Started container" PID=1784 containerID=98abfeebb769138be7b44910a37b0464353cb5c32ad735d9d523943dfd013a7c description=default/busybox/busybox id=b7f96a33-789e-4179-b831-6afb1de05369 name=/runtime.v1.RuntimeService/StartContainer sandboxID=86f9b11e88f307da21b798bee83f8bfb592eeae9bd228d8fbd9d4d334bc198ec
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	98abfeebb7691       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   86f9b11e88f30       busybox                                      default
	ee6f8abd42ecf       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   a18451941d5e5       coredns-66bc5c9577-sgj8p                     kube-system
	ee93b440d88ec       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   4145e1e289d40       storage-provisioner                          kube-system
	1da098e211e67       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   a390a96da48ee       kindnet-cbzm8                                kube-system
	2270aa6cc9003       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   bbc6e043c68c4       kube-proxy-2skj7                             kube-system
	df9eceb82e75a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   40aac60b1d39d       kube-apiserver-embed-certs-834340            kube-system
	1da20ab6893af       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   80b9b29358cbc       kube-scheduler-embed-certs-834340            kube-system
	3e2e1fdfc7bf7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   2b7a2ac279685       etcd-embed-certs-834340                      kube-system
	ec209c3f5a08f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   ce45cf800c6ca       kube-controller-manager-embed-certs-834340   kube-system
	
	
	==> coredns [ee6f8abd42ecf6c35851aa8508a267efb7028ff4a41de7113e64c8a46c18d282] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45427 - 48980 "HINFO IN 5578795988627936868.2371996369816677981. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027009317s
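	The HINFO query above is CoreDNS's startup self-check; cluster DNS can be exercised end-to-end with the same busybox image this run pulls:
	  kubectl run dns-check --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- nslookup kubernetes.default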
	
	
	==> describe nodes <==
	Name:               embed-certs-834340
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-834340
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=embed-certs-834340
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_16_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:16:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-834340
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:17:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:16:53 +0000   Sun, 19 Oct 2025 13:15:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:16:53 +0000   Sun, 19 Oct 2025 13:15:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:16:53 +0000   Sun, 19 Oct 2025 13:15:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:16:53 +0000   Sun, 19 Oct 2025 13:16:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-834340
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                89f6ba5e-d968-48de-b86a-37b91a3521e1
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-sgj8p                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-834340                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-cbzm8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-834340             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-834340    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-2skj7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-834340             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   Starting                 69s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node embed-certs-834340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node embed-certs-834340 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node embed-certs-834340 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-834340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-834340 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-834340 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node embed-certs-834340 event: Registered Node embed-certs-834340 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-834340 status is now: NodeReady
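	A compact way to pull just the node conditions shown above:
	  kubectl get node embed-certs-834340 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'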
	
	
	==> dmesg <==
	[Oct19 12:53] overlayfs: idmapped layers are currently not supported
	[Oct19 12:54] overlayfs: idmapped layers are currently not supported
	[Oct19 12:56] overlayfs: idmapped layers are currently not supported
	[ +16.315179] overlayfs: idmapped layers are currently not supported
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	[Oct19 13:15] overlayfs: idmapped layers are currently not supported
	[ +34.413925] overlayfs: idmapped layers are currently not supported
	[Oct19 13:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3e2e1fdfc7bf75287fa9b7ef6211c9c931a8fcb5afd31330372db0b1687241b4] <==
	{"level":"warn","ts":"2025-10-19T13:16:01.786168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:01.807773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:01.829476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:01.852802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:01.878663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:01.917660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:01.919120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:01.936941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:01.951918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:01.976009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:01.995215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.010137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.032505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.045441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.077098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.089401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.108844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.125150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.151926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.182929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.199384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.246343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.268409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.292522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:16:02.420307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42650","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:17:07 up  2:59,  0 user,  load average: 3.92, 3.38, 2.82
	Linux embed-certs-834340 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1da098e211e67e60d0efbe1fe9016f986254371c5016859bdb9b730eb168bd78] <==
	I1019 13:16:12.690393       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:16:12.691003       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 13:16:12.691165       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:16:12.691178       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:16:12.691192       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:16:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:16:12.908143       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:16:12.908197       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:16:12.908216       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:16:12.909884       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 13:16:42.908877       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 13:16:42.908880       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 13:16:42.910174       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 13:16:42.910174       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 13:16:44.308679       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:16:44.308711       1 metrics.go:72] Registering metrics
	I1019 13:16:44.308778       1 controller.go:711] "Syncing nftables rules"
	I1019 13:16:52.909765       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 13:16:52.909827       1 main.go:301] handling current node
	I1019 13:17:02.907967       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 13:17:02.908041       1 main.go:301] handling current node
	
	
	==> kube-apiserver [df9eceb82e75a862f772004f2bdbfefe30ca03ce486474f27d63bcd4f08a32ee] <==
	I1019 13:16:03.501237       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 13:16:03.509132       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:16:03.509240       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 13:16:03.514178       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1019 13:16:03.550389       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1019 13:16:03.582696       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 13:16:03.696812       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:16:04.188772       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 13:16:04.197247       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 13:16:04.197271       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:16:05.239688       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:16:05.299830       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:16:05.402667       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 13:16:05.412169       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1019 13:16:05.413348       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 13:16:05.418695       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:16:06.339266       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:16:06.569367       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 13:16:06.601621       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 13:16:06.629362       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 13:16:11.399062       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1019 13:16:12.135352       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:16:12.141628       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:16:12.442658       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1019 13:17:05.922660       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:38626: use of closed network connection
	
	
	==> kube-controller-manager [ec209c3f5a08fe0f69efac21a8db78d2a7a17be209baf7705e1e916c6b0b8ebd] <==
	I1019 13:16:11.345933       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 13:16:11.345963       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 13:16:11.349104       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 13:16:11.359388       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 13:16:11.363544       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:16:11.373786       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:16:11.375994       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 13:16:11.377841       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 13:16:11.378511       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 13:16:11.378542       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 13:16:11.378586       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 13:16:11.378651       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 13:16:11.378688       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 13:16:11.380024       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 13:16:11.380128       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:16:11.384032       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 13:16:11.385433       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 13:16:11.385513       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 13:16:11.385540       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 13:16:11.385550       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 13:16:11.385570       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 13:16:11.385953       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 13:16:11.389718       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 13:16:11.407119       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-834340" podCIDRs=["10.244.0.0/24"]
	I1019 13:16:56.333099       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2270aa6cc90039fca1a0c6d10abbcad14c5b5c0a4afe1d1aa5dd19129920e5c2] <==
	I1019 13:16:12.974255       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:16:13.080316       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:16:13.181435       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:16:13.181480       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 13:16:13.181563       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:16:13.215726       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:16:13.215775       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:16:13.226385       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:16:13.226681       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:16:13.226694       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:16:13.228127       1 config.go:200] "Starting service config controller"
	I1019 13:16:13.228136       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:16:13.228151       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:16:13.228155       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:16:13.228165       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:16:13.228169       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:16:13.228766       1 config.go:309] "Starting node config controller"
	I1019 13:16:13.228773       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:16:13.228778       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:16:13.331589       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:16:13.331637       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 13:16:13.331671       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1da20ab6893afe8812334137b50cae8393e05ab0b2c270c47017864aa647c9b7] <==
	I1019 13:16:04.187310       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:16:04.190745       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:16:04.190857       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:16:04.197353       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:16:04.192770       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 13:16:04.199205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 13:16:04.213230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 13:16:04.213806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 13:16:04.213906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 13:16:04.213981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 13:16:04.214072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 13:16:04.215717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 13:16:04.215765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 13:16:04.215815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 13:16:04.215848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 13:16:04.215880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 13:16:04.215912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 13:16:04.215944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 13:16:04.215996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 13:16:04.216029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 13:16:04.216088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 13:16:04.216126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 13:16:04.216159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 13:16:04.216222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1019 13:16:05.399143       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 13:16:11 embed-certs-834340 kubelet[1296]: I1019 13:16:11.550661    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4gtm\" (UniqueName: \"kubernetes.io/projected/9919aa81-5732-4d82-834f-eecd379ff767-kube-api-access-v4gtm\") pod \"kindnet-cbzm8\" (UID: \"9919aa81-5732-4d82-834f-eecd379ff767\") " pod="kube-system/kindnet-cbzm8"
	Oct 19 13:16:11 embed-certs-834340 kubelet[1296]: I1019 13:16:11.550680    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f512885-261d-45a8-9870-c7f00e96dc43-xtables-lock\") pod \"kube-proxy-2skj7\" (UID: \"7f512885-261d-45a8-9870-c7f00e96dc43\") " pod="kube-system/kube-proxy-2skj7"
	Oct 19 13:16:11 embed-certs-834340 kubelet[1296]: I1019 13:16:11.550703    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn5l2\" (UniqueName: \"kubernetes.io/projected/7f512885-261d-45a8-9870-c7f00e96dc43-kube-api-access-pn5l2\") pod \"kube-proxy-2skj7\" (UID: \"7f512885-261d-45a8-9870-c7f00e96dc43\") " pod="kube-system/kube-proxy-2skj7"
	Oct 19 13:16:11 embed-certs-834340 kubelet[1296]: I1019 13:16:11.550722    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7f512885-261d-45a8-9870-c7f00e96dc43-kube-proxy\") pod \"kube-proxy-2skj7\" (UID: \"7f512885-261d-45a8-9870-c7f00e96dc43\") " pod="kube-system/kube-proxy-2skj7"
	Oct 19 13:16:11 embed-certs-834340 kubelet[1296]: E1019 13:16:11.662018    1296 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 19 13:16:11 embed-certs-834340 kubelet[1296]: E1019 13:16:11.662234    1296 projected.go:196] Error preparing data for projected volume kube-api-access-pn5l2 for pod kube-system/kube-proxy-2skj7: configmap "kube-root-ca.crt" not found
	Oct 19 13:16:11 embed-certs-834340 kubelet[1296]: E1019 13:16:11.662398    1296 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f512885-261d-45a8-9870-c7f00e96dc43-kube-api-access-pn5l2 podName:7f512885-261d-45a8-9870-c7f00e96dc43 nodeName:}" failed. No retries permitted until 2025-10-19 13:16:12.162355775 +0000 UTC m=+5.679685272 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pn5l2" (UniqueName: "kubernetes.io/projected/7f512885-261d-45a8-9870-c7f00e96dc43-kube-api-access-pn5l2") pod "kube-proxy-2skj7" (UID: "7f512885-261d-45a8-9870-c7f00e96dc43") : configmap "kube-root-ca.crt" not found
	Oct 19 13:16:11 embed-certs-834340 kubelet[1296]: E1019 13:16:11.666915    1296 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 19 13:16:11 embed-certs-834340 kubelet[1296]: E1019 13:16:11.666958    1296 projected.go:196] Error preparing data for projected volume kube-api-access-v4gtm for pod kube-system/kindnet-cbzm8: configmap "kube-root-ca.crt" not found
	Oct 19 13:16:11 embed-certs-834340 kubelet[1296]: E1019 13:16:11.667026    1296 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9919aa81-5732-4d82-834f-eecd379ff767-kube-api-access-v4gtm podName:9919aa81-5732-4d82-834f-eecd379ff767 nodeName:}" failed. No retries permitted until 2025-10-19 13:16:12.167004273 +0000 UTC m=+5.684333778 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v4gtm" (UniqueName: "kubernetes.io/projected/9919aa81-5732-4d82-834f-eecd379ff767-kube-api-access-v4gtm") pod "kindnet-cbzm8" (UID: "9919aa81-5732-4d82-834f-eecd379ff767") : configmap "kube-root-ca.crt" not found
	Oct 19 13:16:12 embed-certs-834340 kubelet[1296]: I1019 13:16:12.257406    1296 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 19 13:16:12 embed-certs-834340 kubelet[1296]: W1019 13:16:12.392375    1296 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/crio-bbc6e043c68c48b45663b418f01569479c169aa5bfff4d496cea595ac214ff01 WatchSource:0}: Error finding container bbc6e043c68c48b45663b418f01569479c169aa5bfff4d496cea595ac214ff01: Status 404 returned error can't find the container with id bbc6e043c68c48b45663b418f01569479c169aa5bfff4d496cea595ac214ff01
	Oct 19 13:16:12 embed-certs-834340 kubelet[1296]: W1019 13:16:12.438229    1296 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/crio-a390a96da48ee161ee0bdb8354bfb831bf11986c9370a04ff0e5e8eeea88a543 WatchSource:0}: Error finding container a390a96da48ee161ee0bdb8354bfb831bf11986c9370a04ff0e5e8eeea88a543: Status 404 returned error can't find the container with id a390a96da48ee161ee0bdb8354bfb831bf11986c9370a04ff0e5e8eeea88a543
	Oct 19 13:16:12 embed-certs-834340 kubelet[1296]: I1019 13:16:12.775924    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cbzm8" podStartSLOduration=1.775905437 podStartE2EDuration="1.775905437s" podCreationTimestamp="2025-10-19 13:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:16:12.775703982 +0000 UTC m=+6.293033479" watchObservedRunningTime="2025-10-19 13:16:12.775905437 +0000 UTC m=+6.293234934"
	Oct 19 13:16:12 embed-certs-834340 kubelet[1296]: I1019 13:16:12.899788    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2skj7" podStartSLOduration=1.8997726990000001 podStartE2EDuration="1.899772699s" podCreationTimestamp="2025-10-19 13:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:16:12.899435988 +0000 UTC m=+6.416765493" watchObservedRunningTime="2025-10-19 13:16:12.899772699 +0000 UTC m=+6.417102195"
	Oct 19 13:16:53 embed-certs-834340 kubelet[1296]: I1019 13:16:53.213357    1296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 13:16:53 embed-certs-834340 kubelet[1296]: I1019 13:16:53.368798    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/02bc630b-7545-484e-97c0-2918b40a150e-tmp\") pod \"storage-provisioner\" (UID: \"02bc630b-7545-484e-97c0-2918b40a150e\") " pod="kube-system/storage-provisioner"
	Oct 19 13:16:53 embed-certs-834340 kubelet[1296]: I1019 13:16:53.369006    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwwrb\" (UniqueName: \"kubernetes.io/projected/ba81b6cb-a1c6-4d8f-9fd8-33c80b505be0-kube-api-access-zwwrb\") pod \"coredns-66bc5c9577-sgj8p\" (UID: \"ba81b6cb-a1c6-4d8f-9fd8-33c80b505be0\") " pod="kube-system/coredns-66bc5c9577-sgj8p"
	Oct 19 13:16:53 embed-certs-834340 kubelet[1296]: I1019 13:16:53.369123    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba81b6cb-a1c6-4d8f-9fd8-33c80b505be0-config-volume\") pod \"coredns-66bc5c9577-sgj8p\" (UID: \"ba81b6cb-a1c6-4d8f-9fd8-33c80b505be0\") " pod="kube-system/coredns-66bc5c9577-sgj8p"
	Oct 19 13:16:53 embed-certs-834340 kubelet[1296]: I1019 13:16:53.369216    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5zlw\" (UniqueName: \"kubernetes.io/projected/02bc630b-7545-484e-97c0-2918b40a150e-kube-api-access-f5zlw\") pod \"storage-provisioner\" (UID: \"02bc630b-7545-484e-97c0-2918b40a150e\") " pod="kube-system/storage-provisioner"
	Oct 19 13:16:53 embed-certs-834340 kubelet[1296]: W1019 13:16:53.620221    1296 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/crio-4145e1e289d40c7b355d9f0cf44622e1d3ff27d689f72c26e2411245359ea05f WatchSource:0}: Error finding container 4145e1e289d40c7b355d9f0cf44622e1d3ff27d689f72c26e2411245359ea05f: Status 404 returned error can't find the container with id 4145e1e289d40c7b355d9f0cf44622e1d3ff27d689f72c26e2411245359ea05f
	Oct 19 13:16:53 embed-certs-834340 kubelet[1296]: W1019 13:16:53.660391    1296 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/crio-a18451941d5e5be308b61567a6dc777393e1950f9bab82986df9cdbe1dd8b787 WatchSource:0}: Error finding container a18451941d5e5be308b61567a6dc777393e1950f9bab82986df9cdbe1dd8b787: Status 404 returned error can't find the container with id a18451941d5e5be308b61567a6dc777393e1950f9bab82986df9cdbe1dd8b787
	Oct 19 13:16:53 embed-certs-834340 kubelet[1296]: I1019 13:16:53.891036    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.890994607 podStartE2EDuration="40.890994607s" podCreationTimestamp="2025-10-19 13:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:16:53.867347821 +0000 UTC m=+47.384677326" watchObservedRunningTime="2025-10-19 13:16:53.890994607 +0000 UTC m=+47.408324112"
	Oct 19 13:16:54 embed-certs-834340 kubelet[1296]: I1019 13:16:54.863817    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sgj8p" podStartSLOduration=42.863796442 podStartE2EDuration="42.863796442s" podCreationTimestamp="2025-10-19 13:16:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:16:53.891875447 +0000 UTC m=+47.409204961" watchObservedRunningTime="2025-10-19 13:16:54.863796442 +0000 UTC m=+48.381125947"
	Oct 19 13:16:56 embed-certs-834340 kubelet[1296]: I1019 13:16:56.893362    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wjlq\" (UniqueName: \"kubernetes.io/projected/6ad38544-fd49-4f9c-8c24-24f230946955-kube-api-access-6wjlq\") pod \"busybox\" (UID: \"6ad38544-fd49-4f9c-8c24-24f230946955\") " pod="default/busybox"
	
	
	==> storage-provisioner [ee93b440d88ec402f265c377fce74f6940f941a16f42cbf5c692b17b038a118a] <==
	I1019 13:16:53.831985       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 13:16:53.886831       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 13:16:53.886881       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 13:16:53.944756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:53.954976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:16:53.955451       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 13:16:53.955669       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-834340_16aa5765-7ed9-4bad-b2cb-4930031f6e02!
	I1019 13:16:53.965759       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1d54810-d394-48c2-ac3f-ee098575b9a6", APIVersion:"v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-834340_16aa5765-7ed9-4bad-b2cb-4930031f6e02 became leader
	W1019 13:16:53.970023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:53.979227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:16:54.057389       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-834340_16aa5765-7ed9-4bad-b2cb-4930031f6e02!
	W1019 13:16:55.982235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:55.987181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:57.991142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:16:57.997136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:00.000740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:00.028855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:02.032443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:02.038596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:04.042958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:04.049986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:06.055219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:06.060669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:08.065324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:08.073288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
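Note, not part of the captured output: the repeated "v1 Endpoints is deprecated" warnings in the storage-provisioner log above are most likely its leader-election renewals, since the lock it acquires is the kube-system/k8s.io-minikube-hostpath Endpoints object named in the same log; they are noise rather than the failure cause. A minimal sketch to inspect that lock object, assuming the embed-certs-834340 context from this run is still reachable:

    # The lock object the provisioner renews; each renewal goes through the
    # deprecated v1 Endpoints API and emits one of the warnings above.
    kubectl --context embed-certs-834340 -n kube-system \
      get endpoints k8s.io-minikube-hostpath -o yaml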
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-834340 -n embed-certs-834340
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-834340 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.99s)
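Aside, not from the captured run: to iterate on a single failing subtest such as the one above, Go's -run filter accepts the slash-separated subtest path. A minimal sketch, assuming the minikube source tree (where helpers_test.go and start_stop_delete_test.go live under test/integration) and a prebuilt out/minikube-linux-arm64:

    # Re-run only the embed-certs EnableAddonWhileActive subtest; the -run
    # pattern matches the subtest path segments from the FAIL line above,
    # and the timeout is generous because cluster startup is slow.
    go test ./test/integration -v -timeout 30m \
      -run 'TestStartStop/group/embed-certs/serial/EnableAddonWhileActive'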

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-455348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-455348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (273.665876ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:18:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-455348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-455348 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-455348 describe deploy/metrics-server -n kube-system: exit status 1 (85.282325ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-455348 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
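Note, not part of the captured run: the MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused-state check, which (per the stderr) shells out to "sudo runc list -f json" inside the node container and trips over the missing /run/runc state directory. A minimal reproduction sketch, assuming the default-k8s-diff-port-455348 container from this run is still up:

    # Reproduce the probe that failed: list runc containers the way the
    # error message above shows minikube doing it.
    docker exec default-k8s-diff-port-455348 sudo runc list -f json
    # Confirm the state directory the error points at is absent; with crio
    # as the runtime, runc state may simply live elsewhere.
    docker exec default-k8s-diff-port-455348 ls /run/runc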
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-455348
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-455348:

-- stdout --
	[
	    {
	        "Id": "6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7",
	        "Created": "2025-10-19T13:16:44.03379204Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 490574,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:16:44.099684648Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/hosts",
	        "LogPath": "/var/lib/docker/containers/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7-json.log",
	        "Name": "/default-k8s-diff-port-455348",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-455348:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-455348",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7",
	                "LowerDir": "/var/lib/docker/overlay2/69c3312626a00a0a29de39da0ee3edd7eb25e0b33a22ef9214343606d7a497c2-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69c3312626a00a0a29de39da0ee3edd7eb25e0b33a22ef9214343606d7a497c2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69c3312626a00a0a29de39da0ee3edd7eb25e0b33a22ef9214343606d7a497c2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69c3312626a00a0a29de39da0ee3edd7eb25e0b33a22ef9214343606d7a497c2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-455348",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-455348/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-455348",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-455348",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-455348",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d39a9a94aa87db24ed0858a91f67b989b6c8f90c322fa38198872e05d454180b",
	            "SandboxKey": "/var/run/docker/netns/d39a9a94aa87",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-455348": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:e3:9b:f3:f9:4c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feb5b6cb71ad4f1814069d9c1fecfa12355d747dd07980e633df65a307f6c04b",
	                    "EndpointID": "5cc7e4407449bfaa7b4423927a2927abf9ecf9585d533d3c453044479496f566",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-455348",
	                        "6519411d3b62"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
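Side note, not part of the captured run: in the inspect snapshot above, the 8444/tcp entry under NetworkSettings.Ports is the remapped apiserver port that gives this profile its "default-k8s-diff-port" name, forwarded from 127.0.0.1:33446. A small sketch pulling that host port out of the same output, assuming jq is installed:

    # Extract the host port bound to the apiserver's 8444/tcp endpoint
    # (33446 in the snapshot above).
    docker inspect default-k8s-diff-port-455348 \
      | jq -r '.[0].NetworkSettings.Ports["8444/tcp"][0].HostPort'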
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-455348 -n default-k8s-diff-port-455348
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-455348 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-455348 logs -n 25: (1.22973762s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-842494 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │                     │
	│ stop    │ -p old-k8s-version-842494 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:13 UTC │ 19 Oct 25 13:14 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-842494 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │ 19 Oct 25 13:14 UTC │
	│ start   │ -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:14 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable metrics-server -p no-preload-108149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ stop    │ -p no-preload-108149 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable dashboard -p no-preload-108149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:16 UTC │
	│ image   │ old-k8s-version-842494 image list --format=json                                                                                                                                                                                               │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ pause   │ -p old-k8s-version-842494 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                                                                                     │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                                                                                     │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:16 UTC │
	│ image   │ no-preload-108149 image list --format=json                                                                                                                                                                                                    │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ pause   │ -p no-preload-108149 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │                     │
	│ delete  │ -p no-preload-108149                                                                                                                                                                                                                          │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p no-preload-108149                                                                                                                                                                                                                          │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p disable-driver-mounts-418719                                                                                                                                                                                                               │ disable-driver-mounts-418719 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │                     │
	│ stop    │ -p embed-certs-834340 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-834340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-455348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:17:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
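Every entry below follows the standard glog header layout described on the line above. As an aside, here is a minimal sketch of splitting such a line into its fields; the regular expression and field names are my own, not anything from minikube:

	package main

	import (
		"fmt"
		"regexp"
	)

	// glogLine matches headers like "I1019 13:17:21.372089  493482 out.go:360] msg".
	var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		m := glogLine.FindStringSubmatch("I1019 13:17:21.372089  493482 out.go:360] Setting OutFile to fd 1 ...")
		if m != nil {
			fmt.Printf("level=%s date=%s time=%s tid=%s loc=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}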
	I1019 13:17:21.372089  493482 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:17:21.372228  493482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:17:21.372239  493482 out.go:374] Setting ErrFile to fd 2...
	I1019 13:17:21.372245  493482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:17:21.372523  493482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:17:21.372950  493482 out.go:368] Setting JSON to false
	I1019 13:17:21.374048  493482 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10792,"bootTime":1760869050,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:17:21.374142  493482 start.go:141] virtualization:  
	I1019 13:17:21.379157  493482 out.go:179] * [embed-certs-834340] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:17:21.382237  493482 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:17:21.382377  493482 notify.go:220] Checking for updates...
	I1019 13:17:21.388173  493482 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:17:21.391178  493482 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:17:21.393971  493482 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:17:21.396938  493482 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:17:21.399771  493482 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:17:21.403198  493482 config.go:182] Loaded profile config "embed-certs-834340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:17:21.403831  493482 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:17:21.429770  493482 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:17:21.429907  493482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:17:21.495764  493482 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:17:21.485839916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:17:21.495957  493482 docker.go:318] overlay module found
	I1019 13:17:21.499261  493482 out.go:179] * Using the docker driver based on existing profile
	I1019 13:17:21.502281  493482 start.go:305] selected driver: docker
	I1019 13:17:21.502307  493482 start.go:925] validating driver "docker" against &{Name:embed-certs-834340 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:17:21.502414  493482 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:17:21.503155  493482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:17:21.574829  493482 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:17:21.565376633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:17:21.575276  493482 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:17:21.575317  493482 cni.go:84] Creating CNI manager for ""
	I1019 13:17:21.575392  493482 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:17:21.575441  493482 start.go:349] cluster config:
	{Name:embed-certs-834340 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:17:21.580507  493482 out.go:179] * Starting "embed-certs-834340" primary control-plane node in "embed-certs-834340" cluster
	I1019 13:17:21.583534  493482 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:17:21.591129  493482 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:17:21.593962  493482 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:17:21.594025  493482 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 13:17:21.594036  493482 cache.go:58] Caching tarball of preloaded images
	I1019 13:17:21.594072  493482 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:17:21.594142  493482 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:17:21.594157  493482 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 13:17:21.594267  493482 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/config.json ...
	I1019 13:17:21.621984  493482 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:17:21.622010  493482 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:17:21.622025  493482 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:17:21.622048  493482 start.go:360] acquireMachinesLock for embed-certs-834340: {Name:mka158a8ff4f9c1986944dd404295df0d84afabc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:17:21.622106  493482 start.go:364] duration metric: took 37.678µs to acquireMachinesLock for "embed-certs-834340"
	I1019 13:17:21.622132  493482 start.go:96] Skipping create...Using existing machine configuration
	I1019 13:17:21.622138  493482 fix.go:54] fixHost starting: 
	I1019 13:17:21.622406  493482 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:17:21.640545  493482 fix.go:112] recreateIfNeeded on embed-certs-834340: state=Stopped err=<nil>
	W1019 13:17:21.640581  493482 fix.go:138] unexpected machine state, will restart: <nil>
	W1019 13:17:20.055821  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	W1019 13:17:22.554359  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	I1019 13:17:21.643798  493482 out.go:252] * Restarting existing docker container for "embed-certs-834340" ...
	I1019 13:17:21.643888  493482 cli_runner.go:164] Run: docker start embed-certs-834340
	I1019 13:17:21.896021  493482 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:17:21.916035  493482 kic.go:430] container "embed-certs-834340" state is running.
	I1019 13:17:21.916442  493482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-834340
	I1019 13:17:21.942929  493482 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/config.json ...
	I1019 13:17:21.943160  493482 machine.go:93] provisionDockerMachine start ...
	I1019 13:17:21.943227  493482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:17:21.966685  493482 main.go:141] libmachine: Using SSH client type: native
	I1019 13:17:21.967012  493482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1019 13:17:21.967021  493482 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:17:21.967710  493482 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1019 13:17:25.122284  493482 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-834340
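The "handshake failed: EOF" above is expected: the container was started a fraction of a second earlier and its sshd is not yet accepting connections, so the client simply retries until the hostname probe succeeds here at 13:17:25. A generic retry loop of that shape might look like the sketch below; the address and timeouts are invented for illustration, and this is not the libmachine implementation:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry keeps attempting a TCP connection until the deadline,
	// which is the usual way to wait for a just-started container's sshd.
	func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				return conn, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("giving up on %s: %w", addr, err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if conn, err := dialWithRetry("127.0.0.1:33448", 30*time.Second); err == nil {
			conn.Close()
			fmt.Println("sshd is up")
		}
	}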
	
	I1019 13:17:25.122364  493482 ubuntu.go:182] provisioning hostname "embed-certs-834340"
	I1019 13:17:25.122466  493482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:17:25.140620  493482 main.go:141] libmachine: Using SSH client type: native
	I1019 13:17:25.140929  493482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1019 13:17:25.140940  493482 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-834340 && echo "embed-certs-834340" | sudo tee /etc/hostname
	I1019 13:17:25.299218  493482 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-834340
	
	I1019 13:17:25.299291  493482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:17:25.317406  493482 main.go:141] libmachine: Using SSH client type: native
	I1019 13:17:25.317732  493482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1019 13:17:25.317753  493482 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-834340' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-834340/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-834340' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:17:25.470442  493482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
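The hostname script above is idempotent: it leaves /etc/hosts alone when some line already ends in the hostname, rewrites an existing 127.0.1.1 entry when there is one, and appends otherwise. The same decision logic in Go, as a self-contained sketch (the helper name is invented):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell above: do nothing if the name is
	// already mapped, rewrite an existing 127.0.1.1 line if present,
	// otherwise append a new one.
	func ensureHostsEntry(hosts, name string) string {
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "embed-certs-834340"))
	}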
	I1019 13:17:25.470516  493482 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:17:25.470560  493482 ubuntu.go:190] setting up certificates
	I1019 13:17:25.470599  493482 provision.go:84] configureAuth start
	I1019 13:17:25.470703  493482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-834340
	I1019 13:17:25.487935  493482 provision.go:143] copyHostCerts
	I1019 13:17:25.488011  493482 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:17:25.488055  493482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:17:25.488134  493482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:17:25.488240  493482 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:17:25.488246  493482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:17:25.488272  493482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:17:25.488323  493482 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:17:25.488328  493482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:17:25.488350  493482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:17:25.488394  493482 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.embed-certs-834340 san=[127.0.0.1 192.168.85.2 embed-certs-834340 localhost minikube]
	I1019 13:17:26.258333  493482 provision.go:177] copyRemoteCerts
	I1019 13:17:26.258402  493482 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:17:26.258446  493482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:17:26.275414  493482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:17:26.377462  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:17:26.395569  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 13:17:26.413848  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 13:17:26.433175  493482 provision.go:87] duration metric: took 962.53562ms to configureAuth
	I1019 13:17:26.433204  493482 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:17:26.433399  493482 config.go:182] Loaded profile config "embed-certs-834340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:17:26.433507  493482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:17:26.451324  493482 main.go:141] libmachine: Using SSH client type: native
	I1019 13:17:26.451635  493482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1019 13:17:26.451654  493482 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:17:26.777793  493482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
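The 10.96.0.0/12 range written to /etc/sysconfig/crio.minikube here is the cluster's service CIDR (ServiceCIDR in the profile config above); marking it as an insecure registry presumably lets crio pull from registries exposed on in-cluster service IPs without TLS.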
	
	I1019 13:17:26.777814  493482 machine.go:96] duration metric: took 4.834645125s to provisionDockerMachine
	I1019 13:17:26.777825  493482 start.go:293] postStartSetup for "embed-certs-834340" (driver="docker")
	I1019 13:17:26.777837  493482 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:17:26.777901  493482 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:17:26.777940  493482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:17:26.801816  493482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:17:26.909607  493482 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:17:26.913019  493482 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:17:26.913048  493482 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:17:26.913059  493482 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:17:26.913118  493482 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:17:26.913208  493482 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:17:26.913315  493482 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:17:26.921375  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:17:26.939089  493482 start.go:296] duration metric: took 161.248504ms for postStartSetup
	I1019 13:17:26.939242  493482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:17:26.939318  493482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:17:26.956570  493482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:17:27.059787  493482 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:17:27.064706  493482 fix.go:56] duration metric: took 5.44256078s for fixHost
	I1019 13:17:27.064739  493482 start.go:83] releasing machines lock for "embed-certs-834340", held for 5.442618872s
	I1019 13:17:27.064810  493482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-834340
	I1019 13:17:27.082353  493482 ssh_runner.go:195] Run: cat /version.json
	I1019 13:17:27.082409  493482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:17:27.082421  493482 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:17:27.082486  493482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:17:27.106020  493482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:17:27.115567  493482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:17:27.210178  493482 ssh_runner.go:195] Run: systemctl --version
	I1019 13:17:27.305259  493482 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:17:27.342698  493482 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:17:27.347229  493482 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:17:27.347315  493482 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:17:27.355732  493482 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 13:17:27.355758  493482 start.go:495] detecting cgroup driver to use...
	I1019 13:17:27.355790  493482 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:17:27.355842  493482 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:17:27.371457  493482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:17:27.385506  493482 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:17:27.385606  493482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:17:27.401657  493482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:17:27.416486  493482 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:17:27.542178  493482 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:17:27.677781  493482 docker.go:234] disabling docker service ...
	I1019 13:17:27.677919  493482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:17:27.692826  493482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:17:27.706498  493482 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:17:27.830083  493482 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:17:27.956610  493482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:17:27.969954  493482 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:17:27.984302  493482 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 13:17:27.984375  493482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:17:27.993231  493482 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:17:27.993299  493482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:17:28.005541  493482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:17:28.015855  493482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:17:28.025396  493482 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:17:28.035752  493482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:17:28.045416  493482 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:17:28.055464  493482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:17:28.064615  493482 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:17:28.072663  493482 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:17:28.079799  493482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:17:28.200244  493482 ssh_runner.go:195] Run: sudo systemctl restart crio
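The sed series above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, and an unprivileged-port sysctl), IP forwarding is switched on, and crio is then restarted to pick the changes up. Reconstructed from those sed expressions alone (the surrounding TOML sections are not visible in the log, so only the keys are shown), the resulting settings would look roughly like:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]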
	I1019 13:17:28.329269  493482 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:17:28.329337  493482 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:17:28.333435  493482 start.go:563] Will wait 60s for crictl version
	I1019 13:17:28.333495  493482 ssh_runner.go:195] Run: which crictl
	I1019 13:17:28.338070  493482 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:17:28.365327  493482 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 13:17:28.365472  493482 ssh_runner.go:195] Run: crio --version
	I1019 13:17:28.394186  493482 ssh_runner.go:195] Run: crio --version
	I1019 13:17:28.428110  493482 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1019 13:17:24.554637  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	W1019 13:17:26.555185  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	W1019 13:17:28.555345  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	I1019 13:17:28.431000  493482 cli_runner.go:164] Run: docker network inspect embed-certs-834340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
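The --format argument in the docker network inspect call above is an ordinary Go text/template rendered against the network's inspect document. A toy rendering of the same idea, with the struct and field values invented for illustration:

	package main

	import (
		"os"
		"text/template"
	)

	// network is a stand-in for the fields docker exposes to --format.
	type network struct {
		Name   string
		Driver string
	}

	func main() {
		tmpl := template.Must(template.New("net").Parse(
			`{"Name": "{{.Name}}", "Driver": "{{.Driver}}"}` + "\n"))
		if err := tmpl.Execute(os.Stdout, network{Name: "embed-certs-834340", Driver: "bridge"}); err != nil {
			panic(err)
		}
	}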
	I1019 13:17:28.447535  493482 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 13:17:28.451895  493482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:17:28.464715  493482 kubeadm.go:883] updating cluster {Name:embed-certs-834340 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 13:17:28.464847  493482 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:17:28.464931  493482 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:17:28.498261  493482 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:17:28.498285  493482 crio.go:433] Images already preloaded, skipping extraction
	I1019 13:17:28.498339  493482 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:17:28.524346  493482 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:17:28.524372  493482 cache_images.go:85] Images are preloaded, skipping loading
	I1019 13:17:28.524381  493482 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 13:17:28.524534  493482 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-834340 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
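(The empty ExecStart= line in the unit above is deliberate: in a systemd drop-in, an empty assignment clears the ExecStart list inherited from the packaged unit, so only the minikube-specific command below it takes effect.)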
	I1019 13:17:28.524623  493482 ssh_runner.go:195] Run: crio config
	I1019 13:17:28.604752  493482 cni.go:84] Creating CNI manager for ""
	I1019 13:17:28.604781  493482 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:17:28.604799  493482 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 13:17:28.604848  493482 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-834340 NodeName:embed-certs-834340 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:17:28.605027  493482 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-834340"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
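Two details of the generated config above are worth noting: the 0% evictionHard thresholds together with imageGCHighThresholdPercent: 100 disable kubelet disk-pressure management (as its inline comment says), and the zeroed conntrack values tell kube-proxy to leave the kernel defaults untouched. Because the file holds several YAML documents separated by ---, extracting a field requires a multi-document decode; a small sketch, assuming the gopkg.in/yaml.v3 package:

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// Decode document by document and pick out the kind we care about.
		dec := yaml.NewDecoder(strings.NewReader(string(data)))
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF once all documents are consumed
			}
			if doc["kind"] == "KubeProxyConfiguration" {
				fmt.Println("clusterCIDR:", doc["clusterCIDR"])
			}
		}
	}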
	
	I1019 13:17:28.605122  493482 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 13:17:28.613421  493482 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 13:17:28.613536  493482 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 13:17:28.622333  493482 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1019 13:17:28.636766  493482 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:17:28.652148  493482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1019 13:17:28.665781  493482 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 13:17:28.669303  493482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:17:28.679334  493482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:17:28.802570  493482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:17:28.819601  493482 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340 for IP: 192.168.85.2
	I1019 13:17:28.819675  493482 certs.go:195] generating shared ca certs ...
	I1019 13:17:28.819706  493482 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:17:28.819903  493482 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 13:17:28.819988  493482 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 13:17:28.820021  493482 certs.go:257] generating profile certs ...
	I1019 13:17:28.820173  493482 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/client.key
	I1019 13:17:28.820283  493482 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key.21a79282
	I1019 13:17:28.820392  493482 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.key
	I1019 13:17:28.820560  493482 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem (1338 bytes)
	W1019 13:17:28.820637  493482 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518_empty.pem, impossibly tiny 0 bytes
	I1019 13:17:28.820670  493482 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 13:17:28.820733  493482 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 13:17:28.820790  493482 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 13:17:28.820846  493482 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 13:17:28.820925  493482 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:17:28.821655  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 13:17:28.841529  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 13:17:28.859482  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 13:17:28.877388  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 13:17:28.895106  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 13:17:28.917225  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 13:17:28.934529  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 13:17:28.955066  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/embed-certs-834340/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 13:17:28.977000  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /usr/share/ca-certificates/2945182.pem (1708 bytes)
	I1019 13:17:29.001774  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 13:17:29.028869  493482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem --> /usr/share/ca-certificates/294518.pem (1338 bytes)
	I1019 13:17:29.059670  493482 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 13:17:29.073121  493482 ssh_runner.go:195] Run: openssl version
	I1019 13:17:29.080298  493482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294518.pem && ln -fs /usr/share/ca-certificates/294518.pem /etc/ssl/certs/294518.pem"
	I1019 13:17:29.090478  493482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294518.pem
	I1019 13:17:29.095972  493482 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:20 /usr/share/ca-certificates/294518.pem
	I1019 13:17:29.096127  493482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294518.pem
	I1019 13:17:29.140747  493482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294518.pem /etc/ssl/certs/51391683.0"
	I1019 13:17:29.149523  493482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2945182.pem && ln -fs /usr/share/ca-certificates/2945182.pem /etc/ssl/certs/2945182.pem"
	I1019 13:17:29.159207  493482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2945182.pem
	I1019 13:17:29.163032  493482 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:20 /usr/share/ca-certificates/2945182.pem
	I1019 13:17:29.163117  493482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2945182.pem
	I1019 13:17:29.206645  493482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2945182.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 13:17:29.214528  493482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 13:17:29.223251  493482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:17:29.227014  493482 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:17:29.227121  493482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:17:29.270461  493482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
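The link targets above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash filenames: TLS libraries locate a CA under /etc/ssl/certs by the hash of its subject, so each PEM gets a <hash>.0 symlink. The hash can be computed the same way the log does, by invoking openssl; a sketch, with the certificate path taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHash returns the OpenSSL subject hash used to name the
	// /etc/ssl/certs/<hash>.0 symlink for a CA certificate.
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("ln -fs .../minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
	}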
	I1019 13:17:29.279124  493482 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 13:17:29.283263  493482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 13:17:29.324560  493482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 13:17:29.368633  493482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 13:17:29.413982  493482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 13:17:29.459279  493482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 13:17:29.507032  493482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
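Each -checkend 86400 invocation above asks openssl whether the certificate expires within the next 24 hours (non-zero exit status if it will), which is what would trigger certificate regeneration. An equivalent check using only Go's standard library (the path is copied from the log; the helper name is invented):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// inside the given window, mirroring `openssl x509 -checkend`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}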
	I1019 13:17:29.571400  493482 kubeadm.go:400] StartCluster: {Name:embed-certs-834340 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-834340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:17:29.571494  493482 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 13:17:29.571572  493482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 13:17:29.656487  493482 cri.go:89] found id: ""
	I1019 13:17:29.656570  493482 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 13:17:29.671227  493482 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 13:17:29.671249  493482 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 13:17:29.671313  493482 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 13:17:29.689288  493482 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 13:17:29.690109  493482 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-834340" does not appear in /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:17:29.690457  493482 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-292654/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-834340" cluster setting kubeconfig missing "embed-certs-834340" context setting]
	I1019 13:17:29.691039  493482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:17:29.693242  493482 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 13:17:29.708709  493482 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 13:17:29.708797  493482 kubeadm.go:601] duration metric: took 37.532037ms to restartPrimaryControlPlane
	I1019 13:17:29.708823  493482 kubeadm.go:402] duration metric: took 137.433282ms to StartCluster
	I1019 13:17:29.708870  493482 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:17:29.708973  493482 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:17:29.710455  493482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:17:29.711143  493482 config.go:182] Loaded profile config "embed-certs-834340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:17:29.711233  493482 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:17:29.711283  493482 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 13:17:29.711611  493482 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-834340"
	I1019 13:17:29.711628  493482 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-834340"
	W1019 13:17:29.711635  493482 addons.go:247] addon storage-provisioner should already be in state true
	I1019 13:17:29.711657  493482 host.go:66] Checking if "embed-certs-834340" exists ...
	I1019 13:17:29.712171  493482 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:17:29.712387  493482 addons.go:69] Setting dashboard=true in profile "embed-certs-834340"
	I1019 13:17:29.712431  493482 addons.go:238] Setting addon dashboard=true in "embed-certs-834340"
	W1019 13:17:29.712443  493482 addons.go:247] addon dashboard should already be in state true
	I1019 13:17:29.712482  493482 host.go:66] Checking if "embed-certs-834340" exists ...
	I1019 13:17:29.712809  493482 addons.go:69] Setting default-storageclass=true in profile "embed-certs-834340"
	I1019 13:17:29.712853  493482 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-834340"
	I1019 13:17:29.712926  493482 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:17:29.713202  493482 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:17:29.721148  493482 out.go:179] * Verifying Kubernetes components...
	I1019 13:17:29.726683  493482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:17:29.760266  493482 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 13:17:29.763797  493482 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 13:17:29.767184  493482 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 13:17:29.771554  493482 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 13:17:29.771586  493482 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 13:17:29.772180  493482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:17:29.772417  493482 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:17:29.772429  493482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 13:17:29.772512  493482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:17:29.789089  493482 addons.go:238] Setting addon default-storageclass=true in "embed-certs-834340"
	W1019 13:17:29.789112  493482 addons.go:247] addon default-storageclass should already be in state true
	I1019 13:17:29.789135  493482 host.go:66] Checking if "embed-certs-834340" exists ...
	I1019 13:17:29.789573  493482 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:17:29.821812  493482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:17:29.843490  493482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:17:29.857866  493482 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 13:17:29.857893  493482 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 13:17:29.857967  493482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:17:29.884885  493482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:17:30.060719  493482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:17:30.168185  493482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:17:30.181335  493482 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 13:17:30.181425  493482 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 13:17:30.246624  493482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 13:17:30.271644  493482 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 13:17:30.271722  493482 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 13:17:30.331008  493482 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 13:17:30.331087  493482 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 13:17:30.399095  493482 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 13:17:30.399174  493482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 13:17:30.416485  493482 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 13:17:30.416564  493482 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 13:17:30.431285  493482 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 13:17:30.431354  493482 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 13:17:30.448146  493482 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 13:17:30.448219  493482 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 13:17:30.463302  493482 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 13:17:30.463374  493482 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 13:17:30.480033  493482 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 13:17:30.480116  493482 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 13:17:30.499045  493482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
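	Each dashboard manifest is scp'd into /etc/kubernetes/addons and then applied in a single kubectl invocation against the in-cluster kubeconfig. A sketch of how that command line is assembled, assuming local execution rather than ssh_runner:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
			// ... remaining dashboard manifests listed in the log above
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run()
	}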
	W1019 13:17:31.055250  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	W1019 13:17:33.555304  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	I1019 13:17:36.971493  493482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.910683158s)
	I1019 13:17:36.971539  493482 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.803291417s)
	I1019 13:17:36.971569  493482 node_ready.go:35] waiting up to 6m0s for node "embed-certs-834340" to be "Ready" ...
	I1019 13:17:36.971863  493482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.725164752s)
	I1019 13:17:37.011686  493482 node_ready.go:49] node "embed-certs-834340" is "Ready"
	I1019 13:17:37.011772  493482 node_ready.go:38] duration metric: took 40.189001ms for node "embed-certs-834340" to be "Ready" ...
	I1019 13:17:37.011803  493482 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:17:37.011899  493482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:17:37.058182  493482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.559033097s)
	I1019 13:17:37.058415  493482 api_server.go:72] duration metric: took 7.347046321s to wait for apiserver process to appear ...
	I1019 13:17:37.058432  493482 api_server.go:88] waiting for apiserver healthz status ...
	I1019 13:17:37.058450  493482 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 13:17:37.061533  493482 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-834340 addons enable metrics-server
	
	I1019 13:17:37.064672  493482 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1019 13:17:36.055012  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	W1019 13:17:38.554796  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	I1019 13:17:37.068158  493482 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 13:17:37.068182  493482 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 13:17:37.068278  493482 addons.go:514] duration metric: took 7.356986779s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1019 13:17:37.558605  493482 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 13:17:37.567062  493482 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 13:17:37.568175  493482 api_server.go:141] control plane version: v1.34.1
	I1019 13:17:37.568203  493482 api_server.go:131] duration metric: took 509.759969ms to wait for apiserver health ...
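	The 500 above comes from the rbac/bootstrap-roles post-start hook not having finished yet; the wait loop simply re-probes /healthz every ~500ms until it returns 200. A sketch of such a poll, assuming the endpoint from the log and skipping TLS verification for brevity (the real client verifies against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// InsecureSkipVerify is for illustration only; minikube trusts the
		// cluster CA instead of skipping verification.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz ok: %s\n", body)
					return
				}
				fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}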
	I1019 13:17:37.568213  493482 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 13:17:37.571205  493482 system_pods.go:59] 8 kube-system pods found
	I1019 13:17:37.571244  493482 system_pods.go:61] "coredns-66bc5c9577-sgj8p" [ba81b6cb-a1c6-4d8f-9fd8-33c80b505be0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:17:37.571253  493482 system_pods.go:61] "etcd-embed-certs-834340" [53d9ce2f-a823-4b6d-b629-d672a8986024] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:17:37.571259  493482 system_pods.go:61] "kindnet-cbzm8" [9919aa81-5732-4d82-834f-eecd379ff767] Running
	I1019 13:17:37.571269  493482 system_pods.go:61] "kube-apiserver-embed-certs-834340" [ad0eb2ab-9de5-4a93-bb62-974384a4312f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:17:37.571281  493482 system_pods.go:61] "kube-controller-manager-embed-certs-834340" [8a517576-4b00-4537-a6fa-192a9d0839ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:17:37.571286  493482 system_pods.go:61] "kube-proxy-2skj7" [7f512885-261d-45a8-9870-c7f00e96dc43] Running
	I1019 13:17:37.571296  493482 system_pods.go:61] "kube-scheduler-embed-certs-834340" [1b45002b-6861-45b0-928d-da16cc52d739] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:17:37.571302  493482 system_pods.go:61] "storage-provisioner" [02bc630b-7545-484e-97c0-2918b40a150e] Running
	I1019 13:17:37.571311  493482 system_pods.go:74] duration metric: took 3.092087ms to wait for pod list to return data ...
	I1019 13:17:37.571318  493482 default_sa.go:34] waiting for default service account to be created ...
	I1019 13:17:37.573519  493482 default_sa.go:45] found service account: "default"
	I1019 13:17:37.573543  493482 default_sa.go:55] duration metric: took 2.218877ms for default service account to be created ...
	I1019 13:17:37.573552  493482 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 13:17:37.576156  493482 system_pods.go:86] 8 kube-system pods found
	I1019 13:17:37.576187  493482 system_pods.go:89] "coredns-66bc5c9577-sgj8p" [ba81b6cb-a1c6-4d8f-9fd8-33c80b505be0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:17:37.576197  493482 system_pods.go:89] "etcd-embed-certs-834340" [53d9ce2f-a823-4b6d-b629-d672a8986024] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:17:37.576203  493482 system_pods.go:89] "kindnet-cbzm8" [9919aa81-5732-4d82-834f-eecd379ff767] Running
	I1019 13:17:37.576210  493482 system_pods.go:89] "kube-apiserver-embed-certs-834340" [ad0eb2ab-9de5-4a93-bb62-974384a4312f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:17:37.576219  493482 system_pods.go:89] "kube-controller-manager-embed-certs-834340" [8a517576-4b00-4537-a6fa-192a9d0839ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:17:37.576224  493482 system_pods.go:89] "kube-proxy-2skj7" [7f512885-261d-45a8-9870-c7f00e96dc43] Running
	I1019 13:17:37.576234  493482 system_pods.go:89] "kube-scheduler-embed-certs-834340" [1b45002b-6861-45b0-928d-da16cc52d739] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:17:37.576239  493482 system_pods.go:89] "storage-provisioner" [02bc630b-7545-484e-97c0-2918b40a150e] Running
	I1019 13:17:37.576248  493482 system_pods.go:126] duration metric: took 2.691335ms to wait for k8s-apps to be running ...
	I1019 13:17:37.576259  493482 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 13:17:37.576315  493482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:17:37.589511  493482 system_svc.go:56] duration metric: took 13.243347ms WaitForService to wait for kubelet
	I1019 13:17:37.589537  493482 kubeadm.go:586] duration metric: took 7.878169106s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:17:37.589555  493482 node_conditions.go:102] verifying NodePressure condition ...
	I1019 13:17:37.592649  493482 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 13:17:37.592680  493482 node_conditions.go:123] node cpu capacity is 2
	I1019 13:17:37.592693  493482 node_conditions.go:105] duration metric: took 3.133032ms to run NodePressure ...
	I1019 13:17:37.592706  493482 start.go:241] waiting for startup goroutines ...
	I1019 13:17:37.592713  493482 start.go:246] waiting for cluster config update ...
	I1019 13:17:37.592724  493482 start.go:255] writing updated cluster config ...
	I1019 13:17:37.593017  493482 ssh_runner.go:195] Run: rm -f paused
	I1019 13:17:37.600914  493482 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:17:37.605137  493482 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sgj8p" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 13:17:39.636547  493482 pod_ready.go:104] pod "coredns-66bc5c9577-sgj8p" is not "Ready", error: <nil>
	W1019 13:17:40.555577  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	W1019 13:17:42.555754  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	W1019 13:17:42.120819  493482 pod_ready.go:104] pod "coredns-66bc5c9577-sgj8p" is not "Ready", error: <nil>
	W1019 13:17:44.130254  493482 pod_ready.go:104] pod "coredns-66bc5c9577-sgj8p" is not "Ready", error: <nil>
	W1019 13:17:45.063179  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	W1019 13:17:47.561269  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	W1019 13:17:46.611586  493482 pod_ready.go:104] pod "coredns-66bc5c9577-sgj8p" is not "Ready", error: <nil>
	W1019 13:17:48.612035  493482 pod_ready.go:104] pod "coredns-66bc5c9577-sgj8p" is not "Ready", error: <nil>
	W1019 13:17:51.111201  493482 pod_ready.go:104] pod "coredns-66bc5c9577-sgj8p" is not "Ready", error: <nil>
	W1019 13:17:50.056698  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	W1019 13:17:52.554527  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	W1019 13:17:53.610983  493482 pod_ready.go:104] pod "coredns-66bc5c9577-sgj8p" is not "Ready", error: <nil>
	W1019 13:17:55.611720  493482 pod_ready.go:104] pod "coredns-66bc5c9577-sgj8p" is not "Ready", error: <nil>
	W1019 13:17:54.555332  490179 node_ready.go:57] node "default-k8s-diff-port-455348" has "Ready":"False" status (will retry)
	I1019 13:17:55.554978  490179 node_ready.go:49] node "default-k8s-diff-port-455348" is "Ready"
	I1019 13:17:55.555010  490179 node_ready.go:38] duration metric: took 40.0032486s for node "default-k8s-diff-port-455348" to be "Ready" ...
	I1019 13:17:55.555024  490179 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:17:55.555083  490179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:17:55.568058  490179 api_server.go:72] duration metric: took 40.817561286s to wait for apiserver process to appear ...
	I1019 13:17:55.568087  490179 api_server.go:88] waiting for apiserver healthz status ...
	I1019 13:17:55.568107  490179 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1019 13:17:55.576791  490179 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1019 13:17:55.578107  490179 api_server.go:141] control plane version: v1.34.1
	I1019 13:17:55.578139  490179 api_server.go:131] duration metric: took 10.044107ms to wait for apiserver health ...
	I1019 13:17:55.578149  490179 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 13:17:55.584373  490179 system_pods.go:59] 8 kube-system pods found
	I1019 13:17:55.584415  490179 system_pods.go:61] "coredns-66bc5c9577-qn68x" [ec110a63-3a4a-4459-b52f-91f5bbc3040c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:17:55.584423  490179 system_pods.go:61] "etcd-default-k8s-diff-port-455348" [fbed1466-c3ec-408e-a585-1161333eb770] Running
	I1019 13:17:55.584429  490179 system_pods.go:61] "kindnet-m2tx2" [a29cf050-9838-4f87-b000-1bc588bc226e] Running
	I1019 13:17:55.584434  490179 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-455348" [fbca8027-d2e0-47b0-9ec6-d34db77afb1b] Running
	I1019 13:17:55.584439  490179 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-455348" [72ef8b73-4a4e-471d-9f80-8b8c56b15148] Running
	I1019 13:17:55.584443  490179 system_pods.go:61] "kube-proxy-vbd99" [856b676a-25aa-48b5-ad14-043c61758179] Running
	I1019 13:17:55.584448  490179 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-455348" [cb0881af-73f5-43fe-a786-efb577036c6f] Running
	I1019 13:17:55.584454  490179 system_pods.go:61] "storage-provisioner" [dadf6eac-8768-45de-aea6-a3ca3f518c9d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:17:55.584462  490179 system_pods.go:74] duration metric: took 6.307294ms to wait for pod list to return data ...
	I1019 13:17:55.584471  490179 default_sa.go:34] waiting for default service account to be created ...
	I1019 13:17:55.609169  490179 default_sa.go:45] found service account: "default"
	I1019 13:17:55.609198  490179 default_sa.go:55] duration metric: took 24.714502ms for default service account to be created ...
	I1019 13:17:55.609207  490179 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 13:17:55.614430  490179 system_pods.go:86] 8 kube-system pods found
	I1019 13:17:55.614469  490179 system_pods.go:89] "coredns-66bc5c9577-qn68x" [ec110a63-3a4a-4459-b52f-91f5bbc3040c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:17:55.614475  490179 system_pods.go:89] "etcd-default-k8s-diff-port-455348" [fbed1466-c3ec-408e-a585-1161333eb770] Running
	I1019 13:17:55.614481  490179 system_pods.go:89] "kindnet-m2tx2" [a29cf050-9838-4f87-b000-1bc588bc226e] Running
	I1019 13:17:55.614486  490179 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-455348" [fbca8027-d2e0-47b0-9ec6-d34db77afb1b] Running
	I1019 13:17:55.614490  490179 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-455348" [72ef8b73-4a4e-471d-9f80-8b8c56b15148] Running
	I1019 13:17:55.614494  490179 system_pods.go:89] "kube-proxy-vbd99" [856b676a-25aa-48b5-ad14-043c61758179] Running
	I1019 13:17:55.614500  490179 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-455348" [cb0881af-73f5-43fe-a786-efb577036c6f] Running
	I1019 13:17:55.614505  490179 system_pods.go:89] "storage-provisioner" [dadf6eac-8768-45de-aea6-a3ca3f518c9d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:17:55.614528  490179 retry.go:31] will retry after 279.530937ms: missing components: kube-dns
	I1019 13:17:55.898917  490179 system_pods.go:86] 8 kube-system pods found
	I1019 13:17:55.898948  490179 system_pods.go:89] "coredns-66bc5c9577-qn68x" [ec110a63-3a4a-4459-b52f-91f5bbc3040c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:17:55.898955  490179 system_pods.go:89] "etcd-default-k8s-diff-port-455348" [fbed1466-c3ec-408e-a585-1161333eb770] Running
	I1019 13:17:55.898961  490179 system_pods.go:89] "kindnet-m2tx2" [a29cf050-9838-4f87-b000-1bc588bc226e] Running
	I1019 13:17:55.898975  490179 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-455348" [fbca8027-d2e0-47b0-9ec6-d34db77afb1b] Running
	I1019 13:17:55.898981  490179 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-455348" [72ef8b73-4a4e-471d-9f80-8b8c56b15148] Running
	I1019 13:17:55.898986  490179 system_pods.go:89] "kube-proxy-vbd99" [856b676a-25aa-48b5-ad14-043c61758179] Running
	I1019 13:17:55.898990  490179 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-455348" [cb0881af-73f5-43fe-a786-efb577036c6f] Running
	I1019 13:17:55.898995  490179 system_pods.go:89] "storage-provisioner" [dadf6eac-8768-45de-aea6-a3ca3f518c9d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:17:55.899016  490179 retry.go:31] will retry after 374.770385ms: missing components: kube-dns
	I1019 13:17:56.278309  490179 system_pods.go:86] 8 kube-system pods found
	I1019 13:17:56.278348  490179 system_pods.go:89] "coredns-66bc5c9577-qn68x" [ec110a63-3a4a-4459-b52f-91f5bbc3040c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:17:56.278358  490179 system_pods.go:89] "etcd-default-k8s-diff-port-455348" [fbed1466-c3ec-408e-a585-1161333eb770] Running
	I1019 13:17:56.278364  490179 system_pods.go:89] "kindnet-m2tx2" [a29cf050-9838-4f87-b000-1bc588bc226e] Running
	I1019 13:17:56.278369  490179 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-455348" [fbca8027-d2e0-47b0-9ec6-d34db77afb1b] Running
	I1019 13:17:56.278373  490179 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-455348" [72ef8b73-4a4e-471d-9f80-8b8c56b15148] Running
	I1019 13:17:56.278378  490179 system_pods.go:89] "kube-proxy-vbd99" [856b676a-25aa-48b5-ad14-043c61758179] Running
	I1019 13:17:56.278382  490179 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-455348" [cb0881af-73f5-43fe-a786-efb577036c6f] Running
	I1019 13:17:56.278388  490179 system_pods.go:89] "storage-provisioner" [dadf6eac-8768-45de-aea6-a3ca3f518c9d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:17:56.278408  490179 retry.go:31] will retry after 438.459234ms: missing components: kube-dns
	I1019 13:17:56.721769  490179 system_pods.go:86] 8 kube-system pods found
	I1019 13:17:56.721805  490179 system_pods.go:89] "coredns-66bc5c9577-qn68x" [ec110a63-3a4a-4459-b52f-91f5bbc3040c] Running
	I1019 13:17:56.721812  490179 system_pods.go:89] "etcd-default-k8s-diff-port-455348" [fbed1466-c3ec-408e-a585-1161333eb770] Running
	I1019 13:17:56.721819  490179 system_pods.go:89] "kindnet-m2tx2" [a29cf050-9838-4f87-b000-1bc588bc226e] Running
	I1019 13:17:56.721824  490179 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-455348" [fbca8027-d2e0-47b0-9ec6-d34db77afb1b] Running
	I1019 13:17:56.721829  490179 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-455348" [72ef8b73-4a4e-471d-9f80-8b8c56b15148] Running
	I1019 13:17:56.721833  490179 system_pods.go:89] "kube-proxy-vbd99" [856b676a-25aa-48b5-ad14-043c61758179] Running
	I1019 13:17:56.721844  490179 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-455348" [cb0881af-73f5-43fe-a786-efb577036c6f] Running
	I1019 13:17:56.721852  490179 system_pods.go:89] "storage-provisioner" [dadf6eac-8768-45de-aea6-a3ca3f518c9d] Running
	I1019 13:17:56.721860  490179 system_pods.go:126] duration metric: took 1.112646678s to wait for k8s-apps to be running ...
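	The "will retry after ..." lines show system_pods re-listing kube-system pods with a short, growing delay until coredns leaves Pending. A sketch of that retry pattern (the predicate, growth factor, and deadline are illustrative, not minikube's exact backoff):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retry re-runs check with a growing delay until it succeeds or the
	// deadline passes, mirroring the retry.go lines in the log above.
	func retry(deadline time.Duration, check func() error) error {
		delay := 250 * time.Millisecond
		start := time.Now()
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("timed out: %w", err)
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay = delay * 3 / 2 // intervals grow, as in the log (279ms, 374ms, 438ms)
		}
	}

	func main() {
		attempts := 0
		if err := retry(10*time.Second, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("missing components: kube-dns")
			}
			return nil
		}); err == nil {
			fmt.Println("all components running")
		}
	}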
	I1019 13:17:56.721873  490179 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 13:17:56.721945  490179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:17:56.737599  490179 system_svc.go:56] duration metric: took 15.717448ms WaitForService to wait for kubelet
	I1019 13:17:56.737629  490179 kubeadm.go:586] duration metric: took 41.987137353s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:17:56.737647  490179 node_conditions.go:102] verifying NodePressure condition ...
	I1019 13:17:56.740770  490179 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 13:17:56.740805  490179 node_conditions.go:123] node cpu capacity is 2
	I1019 13:17:56.740822  490179 node_conditions.go:105] duration metric: took 3.168002ms to run NodePressure ...
	I1019 13:17:56.740836  490179 start.go:241] waiting for startup goroutines ...
	I1019 13:17:56.740843  490179 start.go:246] waiting for cluster config update ...
	I1019 13:17:56.740858  490179 start.go:255] writing updated cluster config ...
	I1019 13:17:56.741152  490179 ssh_runner.go:195] Run: rm -f paused
	I1019 13:17:56.744968  490179 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:17:56.749109  490179 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qn68x" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:17:56.761851  490179 pod_ready.go:94] pod "coredns-66bc5c9577-qn68x" is "Ready"
	I1019 13:17:56.761881  490179 pod_ready.go:86] duration metric: took 12.74309ms for pod "coredns-66bc5c9577-qn68x" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:17:56.764349  490179 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:17:56.769154  490179 pod_ready.go:94] pod "etcd-default-k8s-diff-port-455348" is "Ready"
	I1019 13:17:56.769182  490179 pod_ready.go:86] duration metric: took 4.806966ms for pod "etcd-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:17:56.771610  490179 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:17:56.776441  490179 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-455348" is "Ready"
	I1019 13:17:56.776468  490179 pod_ready.go:86] duration metric: took 4.832197ms for pod "kube-apiserver-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:17:56.778865  490179 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:17:57.149901  490179 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-455348" is "Ready"
	I1019 13:17:57.149977  490179 pod_ready.go:86] duration metric: took 371.085856ms for pod "kube-controller-manager-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:17:57.349025  490179 pod_ready.go:83] waiting for pod "kube-proxy-vbd99" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:17:57.749011  490179 pod_ready.go:94] pod "kube-proxy-vbd99" is "Ready"
	I1019 13:17:57.749042  490179 pod_ready.go:86] duration metric: took 399.986655ms for pod "kube-proxy-vbd99" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:17:57.950621  490179 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:17:58.348991  490179 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-455348" is "Ready"
	I1019 13:17:58.349020  490179 pod_ready.go:86] duration metric: took 398.372192ms for pod "kube-scheduler-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:17:58.349033  490179 pod_ready.go:40] duration metric: took 1.604035183s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:17:58.399716  490179 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 13:17:58.402985  490179 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-455348" cluster and "default" namespace by default
	W1019 13:17:58.111432  493482 pod_ready.go:104] pod "coredns-66bc5c9577-sgj8p" is not "Ready", error: <nil>
	W1019 13:18:00.158885  493482 pod_ready.go:104] pod "coredns-66bc5c9577-sgj8p" is not "Ready", error: <nil>
	W1019 13:18:02.610389  493482 pod_ready.go:104] pod "coredns-66bc5c9577-sgj8p" is not "Ready", error: <nil>
	W1019 13:18:04.611272  493482 pod_ready.go:104] pod "coredns-66bc5c9577-sgj8p" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 19 13:17:55 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:55.805073725Z" level=info msg="Created container 1121336f1e778f53fee54e9691bf2037384187e5bf3293a6b3c54e77680018dd: kube-system/coredns-66bc5c9577-qn68x/coredns" id=247b99ff-b8c2-4e76-a9f0-7e5894c0c73a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:17:55 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:55.805723096Z" level=info msg="Starting container: 1121336f1e778f53fee54e9691bf2037384187e5bf3293a6b3c54e77680018dd" id=dc1257ee-6278-40f3-a685-020c4aeb4fe7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:17:55 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:55.813005919Z" level=info msg="Started container" PID=1740 containerID=1121336f1e778f53fee54e9691bf2037384187e5bf3293a6b3c54e77680018dd description=kube-system/coredns-66bc5c9577-qn68x/coredns id=dc1257ee-6278-40f3-a685-020c4aeb4fe7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ded1bc671b2e28d12340d1ed0b56233e100a81eb3aa7605adcc41aef9a0941f
	Oct 19 13:17:58 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:58.942941217Z" level=info msg="Running pod sandbox: default/busybox/POD" id=05c8654a-eb4b-4a7e-8399-e00f285309ef name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:17:58 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:58.94300969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:17:58 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:58.948808193Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1b8daead30d168a928aa83c34b3d358a6096777fb9bc5975dcd303a199390925 UID:be6a9614-a438-46fe-8247-1f3e80f868a4 NetNS:/var/run/netns/06bb15a5-1f07-4059-ae41-7ee9a0a9f902 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004c6810}] Aliases:map[]}"
	Oct 19 13:17:58 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:58.948845404Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 13:17:58 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:58.958629652Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1b8daead30d168a928aa83c34b3d358a6096777fb9bc5975dcd303a199390925 UID:be6a9614-a438-46fe-8247-1f3e80f868a4 NetNS:/var/run/netns/06bb15a5-1f07-4059-ae41-7ee9a0a9f902 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004c6810}] Aliases:map[]}"
	Oct 19 13:17:58 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:58.958789276Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 13:17:58 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:58.962210213Z" level=info msg="Ran pod sandbox 1b8daead30d168a928aa83c34b3d358a6096777fb9bc5975dcd303a199390925 with infra container: default/busybox/POD" id=05c8654a-eb4b-4a7e-8399-e00f285309ef name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:17:58 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:58.964892449Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=09a5abb7-d664-4575-a167-8ac155ae83ea name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:17:58 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:58.965173913Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=09a5abb7-d664-4575-a167-8ac155ae83ea name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:17:58 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:58.965328311Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=09a5abb7-d664-4575-a167-8ac155ae83ea name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:17:58 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:58.970215631Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c54ba7b6-a293-4f2b-bb2d-ff44ba5c2a33 name=/runtime.v1.ImageService/PullImage
	Oct 19 13:17:58 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:17:58.973389819Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 13:18:00 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:18:00.950528925Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=c54ba7b6-a293-4f2b-bb2d-ff44ba5c2a33 name=/runtime.v1.ImageService/PullImage
	Oct 19 13:18:00 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:18:00.951489464Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=06ae3a83-136e-42a9-a4c1-1a7e9b045f69 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:18:00 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:18:00.955089119Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4dc0e070-11f7-4502-9811-3d1b69243700 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:18:00 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:18:00.960871089Z" level=info msg="Creating container: default/busybox/busybox" id=2c07d7d3-038b-4d2f-b7c2-77df7903fa01 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:18:00 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:18:00.961650398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:18:00 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:18:00.966544364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:18:00 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:18:00.967345425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:18:00 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:18:00.982746512Z" level=info msg="Created container 07326f724462dd44d3b65857fd796eb4ef2eed54e11856e7c22e6763f4e45670: default/busybox/busybox" id=2c07d7d3-038b-4d2f-b7c2-77df7903fa01 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:18:00 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:18:00.985337431Z" level=info msg="Starting container: 07326f724462dd44d3b65857fd796eb4ef2eed54e11856e7c22e6763f4e45670" id=2d5462d5-9d0a-4161-9ebe-a05bcc8528a1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:18:00 default-k8s-diff-port-455348 crio[838]: time="2025-10-19T13:18:00.989217862Z" level=info msg="Started container" PID=1793 containerID=07326f724462dd44d3b65857fd796eb4ef2eed54e11856e7c22e6763f4e45670 description=default/busybox/busybox id=2d5462d5-9d0a-4161-9ebe-a05bcc8528a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b8daead30d168a928aa83c34b3d358a6096777fb9bc5975dcd303a199390925
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	07326f724462d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   1b8daead30d16       busybox                                                default
	1121336f1e778       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   2ded1bc671b2e       coredns-66bc5c9577-qn68x                               kube-system
	6906481c53bb9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   af9c121c5c029       storage-provisioner                                    kube-system
	773f1e607e6a3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   31967f25dd8f7       kube-proxy-vbd99                                       kube-system
	701f0a7451faa       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   250628eb44168       kindnet-m2tx2                                          kube-system
	d67d884ff92de       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   2d3c84bcb9196       kube-apiserver-default-k8s-diff-port-455348            kube-system
	5d8de10aa3c12       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   ebf1fc54fb88f       kube-scheduler-default-k8s-diff-port-455348            kube-system
	a6c60f50557dd       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   8b5bcbdc99c18       kube-controller-manager-default-k8s-diff-port-455348   kube-system
	22d5312b6fe3e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   a8fcc7b03bf52       etcd-default-k8s-diff-port-455348                      kube-system
	
	
	==> coredns [1121336f1e778f53fee54e9691bf2037384187e5bf3293a6b3c54e77680018dd] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38104 - 63025 "HINFO IN 3771491169258589253.691655695035722698. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011287317s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-455348
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-455348
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=default-k8s-diff-port-455348
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_17_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:17:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-455348
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:18:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:18:00 +0000   Sun, 19 Oct 2025 13:17:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:18:00 +0000   Sun, 19 Oct 2025 13:17:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:18:00 +0000   Sun, 19 Oct 2025 13:17:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:18:00 +0000   Sun, 19 Oct 2025 13:17:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-455348
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                274325ea-a55a-4ae3-bfda-c03acb1cf740
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-qn68x                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-default-k8s-diff-port-455348                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-m2tx2                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-default-k8s-diff-port-455348             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-455348    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-vbd99                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-default-k8s-diff-port-455348             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   Starting                 67s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientPID
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-455348 event: Registered Node default-k8s-diff-port-455348 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-455348 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct19 12:54] overlayfs: idmapped layers are currently not supported
	[Oct19 12:56] overlayfs: idmapped layers are currently not supported
	[ +16.315179] overlayfs: idmapped layers are currently not supported
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	[Oct19 13:15] overlayfs: idmapped layers are currently not supported
	[ +34.413925] overlayfs: idmapped layers are currently not supported
	[Oct19 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.716246] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [22d5312b6fe3ecab288871b282d39387c7bd6fd9ce6c9031e96691ce74ef2c4d] <==
	{"level":"warn","ts":"2025-10-19T13:17:04.149467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.188817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.210084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.228484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.247825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.264347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.287154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.314262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.330589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.348415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.368401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.386347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.411910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.430119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.442921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.475489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.506525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.522219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.541007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.553324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.576077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.604599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.617912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.638035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:04.716414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58822","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:18:08 up  3:00,  0 user,  load average: 3.37, 3.32, 2.83
	Linux default-k8s-diff-port-455348 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [701f0a7451faa889759ab4257afdc37e0794335a8b7df1cebc06f176451b9d0d] <==
	I1019 13:17:14.839500       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:17:14.839724       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 13:17:14.839847       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:17:14.839858       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:17:14.839870       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:17:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:17:15.012791       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:17:15.012823       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:17:15.012832       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:17:15.013192       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 13:17:45.011306       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 13:17:45.024861       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 13:17:45.024990       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 13:17:45.025070       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1019 13:17:46.415632       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:17:46.415670       1 metrics.go:72] Registering metrics
	I1019 13:17:46.415734       1 controller.go:711] "Syncing nftables rules"
	I1019 13:17:55.017230       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:17:55.017294       1 main.go:301] handling current node
	I1019 13:18:05.012323       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:18:05.012364       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d67d884ff92de0498de534f70b806f3cbca19addca177f8ac9757e6ca6272226] <==
	I1019 13:17:05.595419       1 controller.go:667] quota admission added evaluator for: namespaces
	E1019 13:17:05.598010       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1019 13:17:05.711286       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:17:05.711340       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 13:17:05.718854       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:17:05.724500       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 13:17:05.804949       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:17:06.291162       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 13:17:06.301912       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 13:17:06.301938       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:17:07.716873       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:17:07.782734       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:17:07.907825       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 13:17:07.925085       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1019 13:17:07.926766       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 13:17:07.936652       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:17:08.471430       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:17:08.954805       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 13:17:08.980920       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 13:17:09.002980       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 13:17:14.161963       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 13:17:14.209590       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1019 13:17:14.286183       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:17:14.304584       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1019 13:18:06.759636       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:47716: use of closed network connection
	
	
	==> kube-controller-manager [a6c60f50557dd81934b7619bf2348394559ff25f1e3f26745b9489f29183349a] <==
	I1019 13:17:13.457643       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 13:17:13.457787       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 13:17:13.462041       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 13:17:13.462111       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 13:17:13.462149       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 13:17:13.462156       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 13:17:13.462216       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 13:17:13.462245       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 13:17:13.462351       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 13:17:13.462441       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 13:17:13.462542       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-455348"
	I1019 13:17:13.462605       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 13:17:13.466647       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:17:13.481775       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 13:17:13.481879       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 13:17:13.495416       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-455348" podCIDRs=["10.244.0.0/24"]
	I1019 13:17:13.495881       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:17:13.501970       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 13:17:13.506725       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 13:17:13.506861       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 13:17:13.506924       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 13:17:13.508043       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 13:17:13.508132       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 13:17:13.516085       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:17:58.470773       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [773f1e607e6a31c4e8129f646e6ca4587dd804abe69431a0a62588431bb97da8] <==
	I1019 13:17:14.998272       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:17:15.151418       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:17:15.251538       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:17:15.251572       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 13:17:15.251650       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:17:15.282221       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:17:15.282276       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:17:15.291649       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:17:15.292157       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:17:15.292369       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:17:15.293971       1 config.go:200] "Starting service config controller"
	I1019 13:17:15.294027       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:17:15.294083       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:17:15.294123       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:17:15.294159       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:17:15.294193       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:17:15.309831       1 config.go:309] "Starting node config controller"
	I1019 13:17:15.309848       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:17:15.309855       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:17:15.395536       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:17:15.395573       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 13:17:15.395609       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5d8de10aa3c12f24a83ffc8b0944820a0d6811f8847511846422ce3857cf6352] <==
	E1019 13:17:05.538667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 13:17:05.538763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 13:17:05.538802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 13:17:05.538839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 13:17:05.544867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 13:17:06.548334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 13:17:06.558365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 13:17:06.558778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 13:17:06.562048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 13:17:06.652915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 13:17:06.740754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 13:17:06.770785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 13:17:06.784975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 13:17:06.830199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 13:17:06.848298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 13:17:06.860318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 13:17:06.860477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 13:17:06.933614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 13:17:06.939626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 13:17:06.965280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 13:17:06.996378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 13:17:07.081905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 13:17:07.090371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 13:17:07.116677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1019 13:17:09.112604       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 13:17:10 default-k8s-diff-port-455348 kubelet[1295]: E1019 13:17:10.300628    1295 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-455348\" already exists" pod="kube-system/etcd-default-k8s-diff-port-455348"
	Oct 19 13:17:13 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:13.566206    1295 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 13:17:13 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:13.567112    1295 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 13:17:14 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:14.360362    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a29cf050-9838-4f87-b000-1bc588bc226e-lib-modules\") pod \"kindnet-m2tx2\" (UID: \"a29cf050-9838-4f87-b000-1bc588bc226e\") " pod="kube-system/kindnet-m2tx2"
	Oct 19 13:17:14 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:14.360771    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/856b676a-25aa-48b5-ad14-043c61758179-lib-modules\") pod \"kube-proxy-vbd99\" (UID: \"856b676a-25aa-48b5-ad14-043c61758179\") " pod="kube-system/kube-proxy-vbd99"
	Oct 19 13:17:14 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:14.360913    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wbvc\" (UniqueName: \"kubernetes.io/projected/856b676a-25aa-48b5-ad14-043c61758179-kube-api-access-7wbvc\") pod \"kube-proxy-vbd99\" (UID: \"856b676a-25aa-48b5-ad14-043c61758179\") " pod="kube-system/kube-proxy-vbd99"
	Oct 19 13:17:14 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:14.361035    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a29cf050-9838-4f87-b000-1bc588bc226e-xtables-lock\") pod \"kindnet-m2tx2\" (UID: \"a29cf050-9838-4f87-b000-1bc588bc226e\") " pod="kube-system/kindnet-m2tx2"
	Oct 19 13:17:14 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:14.361141    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/856b676a-25aa-48b5-ad14-043c61758179-kube-proxy\") pod \"kube-proxy-vbd99\" (UID: \"856b676a-25aa-48b5-ad14-043c61758179\") " pod="kube-system/kube-proxy-vbd99"
	Oct 19 13:17:14 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:14.361251    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a29cf050-9838-4f87-b000-1bc588bc226e-cni-cfg\") pod \"kindnet-m2tx2\" (UID: \"a29cf050-9838-4f87-b000-1bc588bc226e\") " pod="kube-system/kindnet-m2tx2"
	Oct 19 13:17:14 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:14.361360    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m85x5\" (UniqueName: \"kubernetes.io/projected/a29cf050-9838-4f87-b000-1bc588bc226e-kube-api-access-m85x5\") pod \"kindnet-m2tx2\" (UID: \"a29cf050-9838-4f87-b000-1bc588bc226e\") " pod="kube-system/kindnet-m2tx2"
	Oct 19 13:17:14 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:14.361467    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/856b676a-25aa-48b5-ad14-043c61758179-xtables-lock\") pod \"kube-proxy-vbd99\" (UID: \"856b676a-25aa-48b5-ad14-043c61758179\") " pod="kube-system/kube-proxy-vbd99"
	Oct 19 13:17:14 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:14.474497    1295 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 19 13:17:14 default-k8s-diff-port-455348 kubelet[1295]: W1019 13:17:14.613509    1295 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/crio-31967f25dd8f7e65b4c0d1b62035fadb9b4016140d0207c148df3617d8a4f0ca WatchSource:0}: Error finding container 31967f25dd8f7e65b4c0d1b62035fadb9b4016140d0207c148df3617d8a4f0ca: Status 404 returned error can't find the container with id 31967f25dd8f7e65b4c0d1b62035fadb9b4016140d0207c148df3617d8a4f0ca
	Oct 19 13:17:15 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:15.338343    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vbd99" podStartSLOduration=1.338322868 podStartE2EDuration="1.338322868s" podCreationTimestamp="2025-10-19 13:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:17:15.337617521 +0000 UTC m=+6.447718104" watchObservedRunningTime="2025-10-19 13:17:15.338322868 +0000 UTC m=+6.448423467"
	Oct 19 13:17:19 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:19.201551    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-m2tx2" podStartSLOduration=5.201534912 podStartE2EDuration="5.201534912s" podCreationTimestamp="2025-10-19 13:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:17:15.367468354 +0000 UTC m=+6.477568937" watchObservedRunningTime="2025-10-19 13:17:19.201534912 +0000 UTC m=+10.311635487"
	Oct 19 13:17:55 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:55.358659    1295 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 13:17:55 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:55.476882    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff89b\" (UniqueName: \"kubernetes.io/projected/dadf6eac-8768-45de-aea6-a3ca3f518c9d-kube-api-access-ff89b\") pod \"storage-provisioner\" (UID: \"dadf6eac-8768-45de-aea6-a3ca3f518c9d\") " pod="kube-system/storage-provisioner"
	Oct 19 13:17:55 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:55.477174    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec110a63-3a4a-4459-b52f-91f5bbc3040c-config-volume\") pod \"coredns-66bc5c9577-qn68x\" (UID: \"ec110a63-3a4a-4459-b52f-91f5bbc3040c\") " pod="kube-system/coredns-66bc5c9577-qn68x"
	Oct 19 13:17:55 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:55.477227    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dadf6eac-8768-45de-aea6-a3ca3f518c9d-tmp\") pod \"storage-provisioner\" (UID: \"dadf6eac-8768-45de-aea6-a3ca3f518c9d\") " pod="kube-system/storage-provisioner"
	Oct 19 13:17:55 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:55.477251    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nzt4\" (UniqueName: \"kubernetes.io/projected/ec110a63-3a4a-4459-b52f-91f5bbc3040c-kube-api-access-6nzt4\") pod \"coredns-66bc5c9577-qn68x\" (UID: \"ec110a63-3a4a-4459-b52f-91f5bbc3040c\") " pod="kube-system/coredns-66bc5c9577-qn68x"
	Oct 19 13:17:55 default-k8s-diff-port-455348 kubelet[1295]: W1019 13:17:55.713920    1295 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/crio-af9c121c5c0294576b7fb68f2658aa77e88aaaf8ab4f813212c27475942c243e WatchSource:0}: Error finding container af9c121c5c0294576b7fb68f2658aa77e88aaaf8ab4f813212c27475942c243e: Status 404 returned error can't find the container with id af9c121c5c0294576b7fb68f2658aa77e88aaaf8ab4f813212c27475942c243e
	Oct 19 13:17:55 default-k8s-diff-port-455348 kubelet[1295]: W1019 13:17:55.741395    1295 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/crio-2ded1bc671b2e28d12340d1ed0b56233e100a81eb3aa7605adcc41aef9a0941f WatchSource:0}: Error finding container 2ded1bc671b2e28d12340d1ed0b56233e100a81eb3aa7605adcc41aef9a0941f: Status 404 returned error can't find the container with id 2ded1bc671b2e28d12340d1ed0b56233e100a81eb3aa7605adcc41aef9a0941f
	Oct 19 13:17:56 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:56.436492    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.436381988 podStartE2EDuration="41.436381988s" podCreationTimestamp="2025-10-19 13:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:17:56.416661187 +0000 UTC m=+47.526761770" watchObservedRunningTime="2025-10-19 13:17:56.436381988 +0000 UTC m=+47.546482563"
	Oct 19 13:17:56 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:56.436948    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qn68x" podStartSLOduration=42.436939132 podStartE2EDuration="42.436939132s" podCreationTimestamp="2025-10-19 13:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:17:56.436702501 +0000 UTC m=+47.546803076" watchObservedRunningTime="2025-10-19 13:17:56.436939132 +0000 UTC m=+47.547039707"
	Oct 19 13:17:58 default-k8s-diff-port-455348 kubelet[1295]: I1019 13:17:58.702690    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t46lt\" (UniqueName: \"kubernetes.io/projected/be6a9614-a438-46fe-8247-1f3e80f868a4-kube-api-access-t46lt\") pod \"busybox\" (UID: \"be6a9614-a438-46fe-8247-1f3e80f868a4\") " pod="default/busybox"
	
	
	==> storage-provisioner [6906481c53bb923bb29c8f77ec1a5bd8f11bf99d292c6798d6aaf9bd4043fdf3] <==
	I1019 13:17:55.803950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 13:17:55.826813       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 13:17:55.827139       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 13:17:55.829635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:55.838660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:17:55.839088       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 13:17:55.839297       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-455348_9cd856bc-859e-4685-85c0-c9a5e90d879c!
	I1019 13:17:55.844910       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b99ed1ce-9305-43d9-afc4-d6b8159429cd", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-455348_9cd856bc-859e-4685-85c0-c9a5e90d879c became leader
	W1019 13:17:55.847503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:55.861988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:17:55.941798       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-455348_9cd856bc-859e-4685-85c0-c9a5e90d879c!
	W1019 13:17:57.865831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:57.872733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:59.876172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:17:59.881161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:01.884599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:01.889056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:03.891869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:03.898994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:05.902242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:05.907331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:07.910200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:07.917425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-455348 -n default-k8s-diff-port-455348
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-455348 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.54s)
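Note: the storage-provisioner block in the post-mortem above is full of client-go "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings. They come from the provisioner's Endpoints-based leader election on kube-system/k8s.io-minikube-hostpath (see the LeaderElection event in that block), not from a test failure. A minimal sketch for inspecting the deprecated object and the replacement API the warning points to, assuming the default-k8s-diff-port-455348 context from the logs above (these commands are not part of the test harness):

	# Leader-election record still stored on a core/v1 Endpoints object:
	kubectl --context default-k8s-diff-port-455348 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# The discovery.k8s.io/v1 EndpointSlice objects the deprecation warning recommends:
	kubectl --context default-k8s-diff-port-455348 -n kube-system get endpointslices.discovery.k8s.io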

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (8.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-834340 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-834340 --alsologtostderr -v=1: exit status 80 (2.650015394s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-834340 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 13:18:27.959793  497559 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:18:27.960009  497559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:18:27.960040  497559 out.go:374] Setting ErrFile to fd 2...
	I1019 13:18:27.960060  497559 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:18:27.960366  497559 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:18:27.960668  497559 out.go:368] Setting JSON to false
	I1019 13:18:27.960727  497559 mustload.go:65] Loading cluster: embed-certs-834340
	I1019 13:18:27.961164  497559 config.go:182] Loaded profile config "embed-certs-834340": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:18:27.961740  497559 cli_runner.go:164] Run: docker container inspect embed-certs-834340 --format={{.State.Status}}
	I1019 13:18:27.985413  497559 host.go:66] Checking if "embed-certs-834340" exists ...
	I1019 13:18:27.985836  497559 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:18:28.116636  497559 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-19 13:18:28.095198502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:18:28.117498  497559 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-834340 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 13:18:28.123663  497559 out.go:179] * Pausing node embed-certs-834340 ... 
	I1019 13:18:28.126645  497559 host.go:66] Checking if "embed-certs-834340" exists ...
	I1019 13:18:28.127128  497559 ssh_runner.go:195] Run: systemctl --version
	I1019 13:18:28.127205  497559 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-834340
	I1019 13:18:28.147601  497559 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/embed-certs-834340/id_rsa Username:docker}
	I1019 13:18:28.265268  497559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:18:28.279968  497559 pause.go:52] kubelet running: true
	I1019 13:18:28.280044  497559 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:18:28.585865  497559 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:18:28.585959  497559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:18:28.668271  497559 cri.go:89] found id: "001f191f75075fcbbccb52988562ecce0820f9a6c12edc5db65687f5b91128b8"
	I1019 13:18:28.668291  497559 cri.go:89] found id: "b8c9fc48127f67fc25c4e79ef9da91ed21a917166d93bd3a182b72817f225588"
	I1019 13:18:28.668296  497559 cri.go:89] found id: "dcd1e089da4e3c88ca65e629976e4d87c834a1278e0da3fa1d073128a1540f9b"
	I1019 13:18:28.668300  497559 cri.go:89] found id: "b855e342325c3ece53dabdea13c7937afcd20c23726eca4569481c9fd68ab9dc"
	I1019 13:18:28.668303  497559 cri.go:89] found id: "31231e1c742bdbc0a3dba61c64b968fd68a7bb9fa8d9ab32f58da69d755f6dcc"
	I1019 13:18:28.668307  497559 cri.go:89] found id: "df2f9b832fba0474917a867bc16694bb71f4c9133c4184692e7e5197a908612c"
	I1019 13:18:28.668310  497559 cri.go:89] found id: "716882266ac3c47cb6251f516b90c4cf3cc2bc032ff7bc8e2159a3543b734128"
	I1019 13:18:28.668313  497559 cri.go:89] found id: "c18df00f28ee52ba5914d4eb54d1df3a03b3eb40ef6d981c61e6b91411a7fcf5"
	I1019 13:18:28.668316  497559 cri.go:89] found id: "039382e4cf978d4d0d233ab6e8648f97661496f0b0c36cdb5fac731f9f4f34fd"
	I1019 13:18:28.668330  497559 cri.go:89] found id: "f2b22a7c199217cfb9c5c6c994f073ef81dd212c2d9eb9450de07cf8ab355502"
	I1019 13:18:28.668334  497559 cri.go:89] found id: "1c4acb28dc65c2cf95e3cd764af3122d1e0110b3c5d4eed9941f8e009ca9688f"
	I1019 13:18:28.668337  497559 cri.go:89] found id: ""
	I1019 13:18:28.668390  497559 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:18:28.680364  497559 retry.go:31] will retry after 317.616384ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:18:28Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:18:28.998837  497559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:18:29.014795  497559 pause.go:52] kubelet running: false
	I1019 13:18:29.014867  497559 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:18:29.275032  497559 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:18:29.275145  497559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:18:29.380503  497559 cri.go:89] found id: "001f191f75075fcbbccb52988562ecce0820f9a6c12edc5db65687f5b91128b8"
	I1019 13:18:29.380539  497559 cri.go:89] found id: "b8c9fc48127f67fc25c4e79ef9da91ed21a917166d93bd3a182b72817f225588"
	I1019 13:18:29.380545  497559 cri.go:89] found id: "dcd1e089da4e3c88ca65e629976e4d87c834a1278e0da3fa1d073128a1540f9b"
	I1019 13:18:29.380549  497559 cri.go:89] found id: "b855e342325c3ece53dabdea13c7937afcd20c23726eca4569481c9fd68ab9dc"
	I1019 13:18:29.380553  497559 cri.go:89] found id: "31231e1c742bdbc0a3dba61c64b968fd68a7bb9fa8d9ab32f58da69d755f6dcc"
	I1019 13:18:29.380556  497559 cri.go:89] found id: "df2f9b832fba0474917a867bc16694bb71f4c9133c4184692e7e5197a908612c"
	I1019 13:18:29.380559  497559 cri.go:89] found id: "716882266ac3c47cb6251f516b90c4cf3cc2bc032ff7bc8e2159a3543b734128"
	I1019 13:18:29.380563  497559 cri.go:89] found id: "c18df00f28ee52ba5914d4eb54d1df3a03b3eb40ef6d981c61e6b91411a7fcf5"
	I1019 13:18:29.380566  497559 cri.go:89] found id: "039382e4cf978d4d0d233ab6e8648f97661496f0b0c36cdb5fac731f9f4f34fd"
	I1019 13:18:29.380573  497559 cri.go:89] found id: "f2b22a7c199217cfb9c5c6c994f073ef81dd212c2d9eb9450de07cf8ab355502"
	I1019 13:18:29.380578  497559 cri.go:89] found id: "1c4acb28dc65c2cf95e3cd764af3122d1e0110b3c5d4eed9941f8e009ca9688f"
	I1019 13:18:29.380582  497559 cri.go:89] found id: ""
	I1019 13:18:29.380634  497559 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:18:29.391916  497559 retry.go:31] will retry after 532.809223ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:18:29Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:18:29.925229  497559 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:18:29.946681  497559 pause.go:52] kubelet running: false
	I1019 13:18:29.946740  497559 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:18:30.365320  497559 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:18:30.365391  497559 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:18:30.491396  497559 cri.go:89] found id: "001f191f75075fcbbccb52988562ecce0820f9a6c12edc5db65687f5b91128b8"
	I1019 13:18:30.491414  497559 cri.go:89] found id: "b8c9fc48127f67fc25c4e79ef9da91ed21a917166d93bd3a182b72817f225588"
	I1019 13:18:30.491418  497559 cri.go:89] found id: "dcd1e089da4e3c88ca65e629976e4d87c834a1278e0da3fa1d073128a1540f9b"
	I1019 13:18:30.491422  497559 cri.go:89] found id: "b855e342325c3ece53dabdea13c7937afcd20c23726eca4569481c9fd68ab9dc"
	I1019 13:18:30.491425  497559 cri.go:89] found id: "31231e1c742bdbc0a3dba61c64b968fd68a7bb9fa8d9ab32f58da69d755f6dcc"
	I1019 13:18:30.491428  497559 cri.go:89] found id: "df2f9b832fba0474917a867bc16694bb71f4c9133c4184692e7e5197a908612c"
	I1019 13:18:30.491431  497559 cri.go:89] found id: "716882266ac3c47cb6251f516b90c4cf3cc2bc032ff7bc8e2159a3543b734128"
	I1019 13:18:30.491434  497559 cri.go:89] found id: "c18df00f28ee52ba5914d4eb54d1df3a03b3eb40ef6d981c61e6b91411a7fcf5"
	I1019 13:18:30.491437  497559 cri.go:89] found id: "039382e4cf978d4d0d233ab6e8648f97661496f0b0c36cdb5fac731f9f4f34fd"
	I1019 13:18:30.491443  497559 cri.go:89] found id: "f2b22a7c199217cfb9c5c6c994f073ef81dd212c2d9eb9450de07cf8ab355502"
	I1019 13:18:30.491447  497559 cri.go:89] found id: "1c4acb28dc65c2cf95e3cd764af3122d1e0110b3c5d4eed9941f8e009ca9688f"
	I1019 13:18:30.491450  497559 cri.go:89] found id: ""
	I1019 13:18:30.491505  497559 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:18:30.522245  497559 out.go:203] 
	W1019 13:18:30.525447  497559 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:18:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 13:18:30.525469  497559 out.go:285] * 
	W1019 13:18:30.532783  497559 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 13:18:30.536019  497559 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-834340 --alsologtostderr -v=1 failed: exit status 80
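The root cause is visible in the stderr above: the pause path shells out to `sudo runc list -f json`, which reads container state from runc's default root directory /run/runc; on this node that directory does not exist, so every attempt exits with status 1 and the command finally aborts with GUEST_PAUSE. The log also shows the growing retry delays (317ms, then 532ms). Below is a minimal Go sketch of that retry shape, an illustrative reconstruction rather than minikube's actual retry implementation:

	// Illustrative reconstruction of the retry loop logged by retry.go:31 above;
	// not minikube's actual implementation. It re-runs "runc list -f json" with
	// a growing backoff and gives up after a few failed attempts.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		backoff := 300 * time.Millisecond // first observed delay was ~317ms
		for attempt := 1; attempt <= 3; attempt++ {
			// The same command the log shows failing with
			// "open /run/runc: no such file or directory".
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
			if err == nil {
				fmt.Printf("runc list succeeded: %d bytes\n", len(out))
				return
			}
			fmt.Printf("will retry after %v: list running: %v\n", backoff, err)
			time.Sleep(backoff)
			backoff *= 2 // roughly matches the 317ms -> 532ms progression observed
		}
		fmt.Println("Exiting due to GUEST_PAUSE: list running: runc list failed")
	}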
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-834340
helpers_test.go:243: (dbg) docker inspect embed-certs-834340:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59",
	        "Created": "2025-10-19T13:15:37.885260353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 493611,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:17:21.675656585Z",
	            "FinishedAt": "2025-10-19T13:17:20.815460179Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/hosts",
	        "LogPath": "/var/lib/docker/containers/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59-json.log",
	        "Name": "/embed-certs-834340",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-834340:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-834340",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59",
	                "LowerDir": "/var/lib/docker/overlay2/fd9e9f7bbe80ae9f84f50f65044e2fc095d54180303dacdaaf2af69ede890f60-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fd9e9f7bbe80ae9f84f50f65044e2fc095d54180303dacdaaf2af69ede890f60/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fd9e9f7bbe80ae9f84f50f65044e2fc095d54180303dacdaaf2af69ede890f60/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fd9e9f7bbe80ae9f84f50f65044e2fc095d54180303dacdaaf2af69ede890f60/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-834340",
	                "Source": "/var/lib/docker/volumes/embed-certs-834340/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-834340",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-834340",
	                "name.minikube.sigs.k8s.io": "embed-certs-834340",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f9ea60cc0940c33186ba2db94015034d39edbe57374748acf72e3ba5630448e",
	            "SandboxKey": "/var/run/docker/netns/2f9ea60cc094",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-834340": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:2c:25:03:e1:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4736119f136360f6c549379b3521579c84fb2cab47b61b166d29a201ac636c1c",
	                    "EndpointID": "463d4ff8c65d9cd7d4c11dfec1b89a5b28f89fd5452677e910c937ba9be8a5f1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-834340",
	                        "9a5cfef083e8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
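The inspect output above shows the container healthy from Docker's point of view: State.Status is "running", and all five exposed ports (22, 2376, 5000, 8443, 32443) are published on 127.0.0.1 with ephemeral host ports 33448-33452. Later in this log minikube recovers the SSH host port with a Go template over .NetworkSettings.Ports; here is a small hedged sketch of the same lookup (container name taken from this report, output handling assumed):

	// Sketch: read the host port mapped to the container's 22/tcp using the
	// same inspect template this log runs via cli_runner further below.
	// Illustrative only; error handling is minimal.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"embed-certs-834340").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For the inspect output above this prints 33448 (bound to 127.0.0.1).
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}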
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-834340 -n embed-certs-834340
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-834340 -n embed-certs-834340: exit status 2 (576.38578ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
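The harness tolerates this non-zero exit because `status --format={{.Host}}` still printed "Running": after the failed pause the host container keeps running even though other components may be degraded, and minikube signals that through the exit code. A minimal sketch of such a tolerant probe, assuming the binary path and profile name from this report:

	// Sketch of a tolerant status probe: keep stdout even when the exit code is
	// non-zero, as the harness does above ("may be ok"). exec's Output returns
	// the captured stdout alongside an *exec.ExitError on non-zero exit.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "embed-certs-834340", "-n", "embed-certs-834340")
		out, err := cmd.Output()
		host := strings.TrimSpace(string(out)) // "Running" in the run above
		if err != nil {
			fmt.Printf("status exited non-zero (%v) but host reports %q\n", err, host)
			return
		}
		fmt.Println("host:", host)
	}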
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-834340 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-834340 logs -n 25: (1.847733152s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-108149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ stop    │ -p no-preload-108149 --alsologtostderr -v=3                                                                                                                              │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable dashboard -p no-preload-108149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:16 UTC │
	│ image   │ old-k8s-version-842494 image list --format=json                                                                                                                          │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ pause   │ -p old-k8s-version-842494 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:16 UTC │
	│ image   │ no-preload-108149 image list --format=json                                                                                                                               │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ pause   │ -p no-preload-108149 --alsologtostderr -v=1                                                                                                                              │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │                     │
	│ delete  │ -p no-preload-108149                                                                                                                                                     │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p no-preload-108149                                                                                                                                                     │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p disable-driver-mounts-418719                                                                                                                                          │ disable-driver-mounts-418719 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │                     │
	│ stop    │ -p embed-certs-834340 --alsologtostderr -v=3                                                                                                                             │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-834340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-455348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-455348 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-455348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ image   │ embed-certs-834340 image list --format=json                                                                                                                              │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ pause   │ -p embed-certs-834340 --alsologtostderr -v=1                                                                                                                             │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:18:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:18:21.540162  496573 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:18:21.540367  496573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:18:21.540397  496573 out.go:374] Setting ErrFile to fd 2...
	I1019 13:18:21.540421  496573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:18:21.540698  496573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:18:21.541162  496573 out.go:368] Setting JSON to false
	I1019 13:18:21.542294  496573 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10852,"bootTime":1760869050,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:18:21.542406  496573 start.go:141] virtualization:  
	I1019 13:18:21.545586  496573 out.go:179] * [default-k8s-diff-port-455348] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:18:21.549490  496573 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:18:21.549673  496573 notify.go:220] Checking for updates...
	I1019 13:18:21.555924  496573 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:18:21.559277  496573 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:18:21.562320  496573 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:18:21.565406  496573 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:18:21.568364  496573 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:18:21.571937  496573 config.go:182] Loaded profile config "default-k8s-diff-port-455348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:18:21.572714  496573 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:18:21.604700  496573 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:18:21.604828  496573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:18:21.665785  496573 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:18:21.656439448 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:18:21.665899  496573 docker.go:318] overlay module found
	I1019 13:18:21.668993  496573 out.go:179] * Using the docker driver based on existing profile
	I1019 13:18:21.671685  496573 start.go:305] selected driver: docker
	I1019 13:18:21.671704  496573 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-455348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:18:21.671811  496573 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:18:21.672546  496573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:18:21.745013  496573 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:18:21.735243467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:18:21.745355  496573 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:18:21.745404  496573 cni.go:84] Creating CNI manager for ""
	I1019 13:18:21.745467  496573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:18:21.745513  496573 start.go:349] cluster config:
	{Name:default-k8s-diff-port-455348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:18:21.751430  496573 out.go:179] * Starting "default-k8s-diff-port-455348" primary control-plane node in "default-k8s-diff-port-455348" cluster
	I1019 13:18:21.754281  496573 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:18:21.757257  496573 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:18:21.760200  496573 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:18:21.760264  496573 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 13:18:21.760277  496573 cache.go:58] Caching tarball of preloaded images
	I1019 13:18:21.760305  496573 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:18:21.760380  496573 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:18:21.760389  496573 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 13:18:21.760498  496573 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/config.json ...
	I1019 13:18:21.786722  496573 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:18:21.786742  496573 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:18:21.786777  496573 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:18:21.786801  496573 start.go:360] acquireMachinesLock for default-k8s-diff-port-455348: {Name:mk240c57fae30746abb498299da3308a8a0334da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:18:21.786867  496573 start.go:364] duration metric: took 48.862µs to acquireMachinesLock for "default-k8s-diff-port-455348"
	I1019 13:18:21.786888  496573 start.go:96] Skipping create...Using existing machine configuration
	I1019 13:18:21.786894  496573 fix.go:54] fixHost starting: 
	I1019 13:18:21.787147  496573 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:18:21.804524  496573 fix.go:112] recreateIfNeeded on default-k8s-diff-port-455348: state=Stopped err=<nil>
	W1019 13:18:21.804552  496573 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 13:18:21.807900  496573 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-455348" ...
	I1019 13:18:21.807992  496573 cli_runner.go:164] Run: docker start default-k8s-diff-port-455348
	I1019 13:18:22.059879  496573 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:18:22.091504  496573 kic.go:430] container "default-k8s-diff-port-455348" state is running.
	I1019 13:18:22.092034  496573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-455348
	I1019 13:18:22.116803  496573 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/config.json ...
	I1019 13:18:22.117259  496573 machine.go:93] provisionDockerMachine start ...
	I1019 13:18:22.117336  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:22.152887  496573 main.go:141] libmachine: Using SSH client type: native
	I1019 13:18:22.153259  496573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1019 13:18:22.153269  496573 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:18:22.153985  496573 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35922->127.0.0.1:33453: read: connection reset by peer
	I1019 13:18:25.301379  496573 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-455348
	
	I1019 13:18:25.301411  496573 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-455348"
	I1019 13:18:25.301475  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:25.320007  496573 main.go:141] libmachine: Using SSH client type: native
	I1019 13:18:25.320900  496573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1019 13:18:25.320918  496573 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-455348 && echo "default-k8s-diff-port-455348" | sudo tee /etc/hostname
	I1019 13:18:25.481093  496573 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-455348
	
	I1019 13:18:25.481179  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:25.499094  496573 main.go:141] libmachine: Using SSH client type: native
	I1019 13:18:25.499410  496573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1019 13:18:25.499437  496573 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-455348' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-455348/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-455348' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:18:25.645895  496573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 13:18:25.645979  496573 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:18:25.646038  496573 ubuntu.go:190] setting up certificates
	I1019 13:18:25.646073  496573 provision.go:84] configureAuth start
	I1019 13:18:25.646172  496573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-455348
	I1019 13:18:25.662741  496573 provision.go:143] copyHostCerts
	I1019 13:18:25.662812  496573 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:18:25.662833  496573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:18:25.662918  496573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:18:25.663030  496573 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:18:25.663042  496573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:18:25.663070  496573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:18:25.663144  496573 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:18:25.663153  496573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:18:25.663178  496573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:18:25.663243  496573 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-455348 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-455348 localhost minikube]
	I1019 13:18:25.976151  496573 provision.go:177] copyRemoteCerts
	I1019 13:18:25.976231  496573 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:18:25.976272  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:25.995607  496573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:18:26.113383  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:18:26.131481  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 13:18:26.150518  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 13:18:26.168626  496573 provision.go:87] duration metric: took 522.513446ms to configureAuth
	I1019 13:18:26.168656  496573 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:18:26.168857  496573 config.go:182] Loaded profile config "default-k8s-diff-port-455348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:18:26.168969  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:26.186727  496573 main.go:141] libmachine: Using SSH client type: native
	I1019 13:18:26.187047  496573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1019 13:18:26.187073  496573 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:18:26.502325  496573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:18:26.502353  496573 machine.go:96] duration metric: took 4.385081913s to provisionDockerMachine
	I1019 13:18:26.502364  496573 start.go:293] postStartSetup for "default-k8s-diff-port-455348" (driver="docker")
	I1019 13:18:26.502375  496573 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:18:26.502442  496573 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:18:26.502484  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:26.523004  496573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:18:26.629883  496573 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:18:26.633291  496573 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:18:26.633318  496573 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:18:26.633330  496573 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:18:26.633381  496573 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:18:26.633462  496573 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:18:26.633564  496573 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:18:26.641241  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:18:26.659234  496573 start.go:296] duration metric: took 156.854739ms for postStartSetup
	I1019 13:18:26.659315  496573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:18:26.659386  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:26.677727  496573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:18:26.780612  496573 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:18:26.785941  496573 fix.go:56] duration metric: took 4.999040058s for fixHost
	I1019 13:18:26.785967  496573 start.go:83] releasing machines lock for "default-k8s-diff-port-455348", held for 4.999090815s
	I1019 13:18:26.786044  496573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-455348
	I1019 13:18:26.802566  496573 ssh_runner.go:195] Run: cat /version.json
	I1019 13:18:26.802621  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:26.802876  496573 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:18:26.802937  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:26.825145  496573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:18:26.835235  496573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:18:26.925402  496573 ssh_runner.go:195] Run: systemctl --version
	I1019 13:18:27.022488  496573 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:18:27.085158  496573 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:18:27.093913  496573 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:18:27.094013  496573 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:18:27.103737  496573 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 13:18:27.103788  496573 start.go:495] detecting cgroup driver to use...
	I1019 13:18:27.103870  496573 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:18:27.103943  496573 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:18:27.133073  496573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:18:27.147883  496573 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:18:27.147987  496573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:18:27.164439  496573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:18:27.178049  496573 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:18:27.318021  496573 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:18:27.458546  496573 docker.go:234] disabling docker service ...
	I1019 13:18:27.458691  496573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:18:27.476487  496573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:18:27.490783  496573 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:18:27.665360  496573 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:18:27.846822  496573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:18:27.863159  496573 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:18:27.889305  496573 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 13:18:27.889368  496573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.909962  496573 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:18:27.910030  496573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.920638  496573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.933051  496573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.945761  496573 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:18:27.954172  496573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.967798  496573 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.977139  496573 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.988498  496573 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:18:27.997550  496573 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:18:28.007935  496573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:18:28.205248  496573 ssh_runner.go:195] Run: sudo systemctl restart crio
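Note: the block above is minikube's whole CRI-O reconfiguration step; every change is an in-place sed edit of the drop-in /etc/crio/crio.conf.d/02-crio.conf, applied by a daemon-reload and a runtime restart. A minimal sketch of the core edits, reproducible by hand on a node (paths, image tag, and cgroup manager taken verbatim from the log lines above):
	# point CRI-O at the pause image kubeadm expects
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	# match the host's cgroup driver and keep conmon in the pod cgroup
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# apply the new configuration
	sudo systemctl daemon-reload && sudo systemctl restart crio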
	I1019 13:18:28.348676  496573 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:18:28.348754  496573 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:18:28.353007  496573 start.go:563] Will wait 60s for crictl version
	I1019 13:18:28.353068  496573 ssh_runner.go:195] Run: which crictl
	I1019 13:18:28.360341  496573 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:18:28.421076  496573 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
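Note: crictl resolves its endpoint through the /etc/crictl.yaml written a few lines earlier (runtime-endpoint: unix:///var/run/crio/crio.sock). The same version query can be made without the config file by passing the endpoint explicitly:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version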
	I1019 13:18:28.421188  496573 ssh_runner.go:195] Run: crio --version
	I1019 13:18:28.460680  496573 ssh_runner.go:195] Run: crio --version
	I1019 13:18:28.503821  496573 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 13:18:28.506915  496573 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-455348 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
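Note: the Go template in the inspect call above hand-assembles a small JSON object (name, driver, subnet, gateway, MTU, container IPs) out of docker's network data. For a quick look at just the address plan, a simpler template over the same data works:
	docker network inspect default-k8s-diff-port-455348 --format '{{json .IPAM.Config}}'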
	I1019 13:18:28.530375  496573 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 13:18:28.537035  496573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:18:28.547802  496573 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-455348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 13:18:28.547921  496573 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:18:28.547988  496573 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:18:28.586475  496573 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:18:28.586494  496573 crio.go:433] Images already preloaded, skipping extraction
	I1019 13:18:28.586538  496573 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:18:28.627103  496573 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:18:28.627125  496573 cache_images.go:85] Images are preloaded, skipping loading
	I1019 13:18:28.627133  496573 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1019 13:18:28.627263  496573 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-455348 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 13:18:28.627353  496573 ssh_runner.go:195] Run: crio config
	I1019 13:18:28.704851  496573 cni.go:84] Creating CNI manager for ""
	I1019 13:18:28.704877  496573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:18:28.704898  496573 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 13:18:28.704925  496573 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-455348 NodeName:default-k8s-diff-port-455348 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:18:28.705062  496573 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-455348"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
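	Note: the four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what minikube writes to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a sanity check outside of minikube, recent kubeadm releases ship a validator for such a file; a sketch, assuming the validate subcommand is available in the v1.34.1 binary the log shows:
	# structural validation of the generated config against the target kubeadm version
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new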
	
	I1019 13:18:28.705135  496573 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 13:18:28.713212  496573 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 13:18:28.713286  496573 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 13:18:28.720726  496573 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1019 13:18:28.734174  496573 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:18:28.748809  496573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1019 13:18:28.762503  496573 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 13:18:28.766312  496573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
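Note: both /etc/hosts updates in this log use the same idiom: filter out any stale line for the name, append the fresh mapping, and cp the result back over the file. Inside a container /etc/hosts is a bind mount, so rename-based editors (sed -i and friends) typically cannot replace it, while copying onto the existing inode succeeds. A generalized sketch (name and IP taken from the log line above; substitute your own):
	NAME=control-plane.minikube.internal
	IP=192.168.76.2
	# keep every line except the old entry, then append the new one
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$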
	I1019 13:18:28.776381  496573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:18:28.897873  496573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:18:28.916528  496573 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348 for IP: 192.168.76.2
	I1019 13:18:28.916550  496573 certs.go:195] generating shared ca certs ...
	I1019 13:18:28.916567  496573 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:28.916741  496573 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 13:18:28.916799  496573 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 13:18:28.916821  496573 certs.go:257] generating profile certs ...
	I1019 13:18:28.916927  496573 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.key
	I1019 13:18:28.917014  496573 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.key.223e319e
	I1019 13:18:28.917065  496573 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.key
	I1019 13:18:28.917190  496573 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem (1338 bytes)
	W1019 13:18:28.917237  496573 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518_empty.pem, impossibly tiny 0 bytes
	I1019 13:18:28.917250  496573 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 13:18:28.917282  496573 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 13:18:28.917310  496573 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 13:18:28.917335  496573 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 13:18:28.917391  496573 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:18:28.918149  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 13:18:28.936531  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 13:18:28.954178  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 13:18:28.971416  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 13:18:28.989202  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1019 13:18:29.029614  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 13:18:29.049013  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 13:18:29.079525  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 13:18:29.139527  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem --> /usr/share/ca-certificates/294518.pem (1338 bytes)
	I1019 13:18:29.183461  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /usr/share/ca-certificates/2945182.pem (1708 bytes)
	I1019 13:18:29.234775  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 13:18:29.256094  496573 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 13:18:29.271752  496573 ssh_runner.go:195] Run: openssl version
	I1019 13:18:29.279834  496573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294518.pem && ln -fs /usr/share/ca-certificates/294518.pem /etc/ssl/certs/294518.pem"
	I1019 13:18:29.289035  496573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294518.pem
	I1019 13:18:29.293486  496573 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:20 /usr/share/ca-certificates/294518.pem
	I1019 13:18:29.293605  496573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294518.pem
	I1019 13:18:29.346669  496573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294518.pem /etc/ssl/certs/51391683.0"
	I1019 13:18:29.355542  496573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2945182.pem && ln -fs /usr/share/ca-certificates/2945182.pem /etc/ssl/certs/2945182.pem"
	I1019 13:18:29.364470  496573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2945182.pem
	I1019 13:18:29.368960  496573 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:20 /usr/share/ca-certificates/2945182.pem
	I1019 13:18:29.369074  496573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2945182.pem
	I1019 13:18:29.413391  496573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2945182.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 13:18:29.422322  496573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 13:18:29.430738  496573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:18:29.434819  496573 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:18:29.434925  496573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:18:29.478327  496573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
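Note: the interleaved openssl x509 -hash calls explain the odd symlink names: OpenSSL looks up CAs in /etc/ssl/certs by subject-hash filename, so each installed PEM gets a link named <hash>.0 (the suffix disambiguates hash collisions). The minikubeCA link above can be reproduced as:
	# -hash prints the subject hash; for this CA the log shows b5213941
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"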
	I1019 13:18:29.486540  496573 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 13:18:29.490505  496573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 13:18:29.532956  496573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 13:18:29.577751  496573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 13:18:29.619524  496573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 13:18:29.668087  496573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 13:18:29.721112  496573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
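Note: the -checkend 86400 probes are the cert-expiry gate: openssl exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and non-zero otherwise, so a failing check is what would trigger regeneration. One check made explicit (the handling shown is illustrative, not minikube's actual code path):
	if ! sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	    echo "certificate expires within 24h; regenerate"
	fi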
	I1019 13:18:29.766001  496573 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-455348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:18:29.766137  496573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 13:18:29.766241  496573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 13:18:29.861392  496573 cri.go:89] found id: "9dc424071c1b92771542bfccd38e435461e8182ac00adb300909438d1cbf9b8f"
	I1019 13:18:29.861416  496573 cri.go:89] found id: "b34e96695557c6959cce715a57b32eef60a662626ab95fd5b08a3505f2cfe53a"
	I1019 13:18:29.861431  496573 cri.go:89] found id: "e5b09162fcaf4578399f5a03831d7d61cf4bfd1901478ea7fed991f19b9f174e"
	I1019 13:18:29.861436  496573 cri.go:89] found id: ""
	I1019 13:18:29.861527  496573 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 13:18:29.893771  496573 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:18:29Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:18:29.893899  496573 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 13:18:29.913836  496573 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 13:18:29.913858  496573 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 13:18:29.913940  496573 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 13:18:29.929442  496573 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 13:18:29.930356  496573 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-455348" does not appear in /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:18:29.930973  496573 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-292654/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-455348" cluster setting kubeconfig missing "default-k8s-diff-port-455348" context setting]
	I1019 13:18:29.931970  496573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:29.934201  496573 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 13:18:29.963606  496573 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 13:18:29.963692  496573 kubeadm.go:601] duration metric: took 49.826861ms to restartPrimaryControlPlane
	I1019 13:18:29.963716  496573 kubeadm.go:402] duration metric: took 197.723226ms to StartCluster
	I1019 13:18:29.963750  496573 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:29.963832  496573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:18:29.965374  496573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:29.965806  496573 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:18:29.966248  496573 config.go:182] Loaded profile config "default-k8s-diff-port-455348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:18:29.966222  496573 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 13:18:29.966399  496573 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-455348"
	I1019 13:18:29.966446  496573 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-455348"
	W1019 13:18:29.966466  496573 addons.go:247] addon storage-provisioner should already be in state true
	I1019 13:18:29.966501  496573 host.go:66] Checking if "default-k8s-diff-port-455348" exists ...
	I1019 13:18:29.967103  496573 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:18:29.967289  496573 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-455348"
	I1019 13:18:29.967333  496573 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-455348"
	W1019 13:18:29.967353  496573 addons.go:247] addon dashboard should already be in state true
	I1019 13:18:29.967409  496573 host.go:66] Checking if "default-k8s-diff-port-455348" exists ...
	I1019 13:18:29.967624  496573 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-455348"
	I1019 13:18:29.967639  496573 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-455348"
	I1019 13:18:29.967867  496573 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:18:29.968338  496573 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:18:29.972667  496573 out.go:179] * Verifying Kubernetes components...
	I1019 13:18:29.979311  496573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:18:30.034501  496573 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 13:18:30.039105  496573 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:18:30.039129  496573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 13:18:30.039199  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
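Note: this inspect call is how ssh_runner reaches the node: the kic container publishes its SSH port 22 on an ephemeral host port (33453 earlier in this log), and the Go template digs it out of .NetworkSettings.Ports. Standalone:
	# resolve the host port mapped to the container's 22/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-455348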
	I1019 13:18:30.039443  496573 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-455348"
	W1019 13:18:30.039458  496573 addons.go:247] addon default-storageclass should already be in state true
	I1019 13:18:30.039486  496573 host.go:66] Checking if "default-k8s-diff-port-455348" exists ...
	I1019 13:18:30.039915  496573 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:18:30.058246  496573 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 13:18:30.061853  496573 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.813115183Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.816311297Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.816345701Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.816366822Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.819438527Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.819475992Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.819497974Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.822752779Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.822788374Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.822811094Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.825867751Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.825933483Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.048864531Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=41b3bfc6-9939-464d-bcc2-750ac0e08129 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.049715053Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d588a416-0757-4213-ad11-938729c2db21 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.050749816Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd/dashboard-metrics-scraper" id=a194e46a-1843-4bd4-91cd-634dbbebb2d4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.050998952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.05838818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.059033252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.104275074Z" level=info msg="Created container f2b22a7c199217cfb9c5c6c994f073ef81dd212c2d9eb9450de07cf8ab355502: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd/dashboard-metrics-scraper" id=a194e46a-1843-4bd4-91cd-634dbbebb2d4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.110103469Z" level=info msg="Starting container: f2b22a7c199217cfb9c5c6c994f073ef81dd212c2d9eb9450de07cf8ab355502" id=c209a67b-9848-46cb-b6ba-0071a897075a name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.11215834Z" level=info msg="Started container" PID=1734 containerID=f2b22a7c199217cfb9c5c6c994f073ef81dd212c2d9eb9450de07cf8ab355502 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd/dashboard-metrics-scraper id=c209a67b-9848-46cb-b6ba-0071a897075a name=/runtime.v1.RuntimeService/StartContainer sandboxID=56400a7f42af70fef47bfdecec34fa2d48b5ae7e0a39ba11b4050522612868c8
	Oct 19 13:18:27 embed-certs-834340 conmon[1730]: conmon f2b22a7c199217cfb9c5 <ninfo>: container 1734 exited with status 1
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.337106947Z" level=info msg="Removing container: 4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e" id=a9410066-8983-4630-b81f-04cbfea6b8d6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.347763248Z" level=info msg="Error loading conmon cgroup of container 4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e: cgroup deleted" id=a9410066-8983-4630-b81f-04cbfea6b8d6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.373801239Z" level=info msg="Removed container 4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd/dashboard-metrics-scraper" id=a9410066-8983-4630-b81f-04cbfea6b8d6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f2b22a7c19921       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago        Exited              dashboard-metrics-scraper   3                   56400a7f42af7       dashboard-metrics-scraper-6ffb444bf9-9jkcd   kubernetes-dashboard
	001f191f75075       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   67b876e70c68d       storage-provisioner                          kube-system
	1c4acb28dc65c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   d29b3670f80ae       kubernetes-dashboard-855c9754f9-m9x8r        kubernetes-dashboard
	b8c9fc48127f6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   cb24a785b353a       coredns-66bc5c9577-sgj8p                     kube-system
	7aa060bc9ee4f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   91f2df5cd79a7       busybox                                      default
	dcd1e089da4e3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   1f2e1c6c33265       kube-proxy-2skj7                             kube-system
	b855e342325c3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   481e1e76112b0       kindnet-cbzm8                                kube-system
	31231e1c742bd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   67b876e70c68d       storage-provisioner                          kube-system
	df2f9b832fba0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   9f04ef29e043e       etcd-embed-certs-834340                      kube-system
	716882266ac3c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   7005f8628621d       kube-scheduler-embed-certs-834340            kube-system
	c18df00f28ee5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   93dc6de41d205       kube-apiserver-embed-certs-834340            kube-system
	039382e4cf978       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   7326fc743321b       kube-controller-manager-embed-certs-834340   kube-system
	
	
	==> coredns [b8c9fc48127f67fc25c4e79ef9da91ed21a917166d93bd3a182b72817f225588] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43480 - 17811 "HINFO IN 5154733616253157874.8180394126122793850. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013301227s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-834340
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-834340
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=embed-certs-834340
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_16_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:16:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-834340
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:18:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:18:06 +0000   Sun, 19 Oct 2025 13:15:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:18:06 +0000   Sun, 19 Oct 2025 13:15:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:18:06 +0000   Sun, 19 Oct 2025 13:15:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:18:06 +0000   Sun, 19 Oct 2025 13:16:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-834340
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                89f6ba5e-d968-48de-b86a-37b91a3521e1
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-sgj8p                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-embed-certs-834340                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-cbzm8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-embed-certs-834340             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-embed-certs-834340    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-2skj7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-embed-certs-834340             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9jkcd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m9x8r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m19s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   Starting                 2m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node embed-certs-834340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node embed-certs-834340 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s (x8 over 2m34s)  kubelet          Node embed-certs-834340 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m26s                  kubelet          Node embed-certs-834340 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s                  kubelet          Node embed-certs-834340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m26s                  kubelet          Node embed-certs-834340 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s                  node-controller  Node embed-certs-834340 event: Registered Node embed-certs-834340 in Controller
	  Normal   NodeReady                99s                    kubelet          Node embed-certs-834340 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node embed-certs-834340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node embed-certs-834340 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node embed-certs-834340 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node embed-certs-834340 event: Registered Node embed-certs-834340 in Controller
	
	
	==> dmesg <==
	[Oct19 12:56] overlayfs: idmapped layers are currently not supported
	[ +16.315179] overlayfs: idmapped layers are currently not supported
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	[Oct19 13:15] overlayfs: idmapped layers are currently not supported
	[ +34.413925] overlayfs: idmapped layers are currently not supported
	[Oct19 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.716246] overlayfs: idmapped layers are currently not supported
	[Oct19 13:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [df2f9b832fba0474917a867bc16694bb71f4c9133c4184692e7e5197a908612c] <==
	{"level":"warn","ts":"2025-10-19T13:17:34.314610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.336104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.356759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.371152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.396413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.418087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.448765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.465516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.488141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.492067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.515670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.525171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.565917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.575191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.585919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.612986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.630504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.640405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.656055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.677498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.691099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.742336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.762657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.778697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.866054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58510","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:18:32 up  3:01,  0 user,  load average: 2.84, 3.21, 2.81
	Linux embed-certs-834340 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b855e342325c3ece53dabdea13c7937afcd20c23726eca4569481c9fd68ab9dc] <==
	I1019 13:17:36.624716       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:17:36.624939       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 13:17:36.625071       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:17:36.625082       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:17:36.625091       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:17:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:17:36.812115       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:17:36.812159       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:17:36.812172       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:17:36.813074       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 13:18:06.809306       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 13:18:06.812836       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 13:18:06.812948       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 13:18:06.813029       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1019 13:18:08.012964       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:18:08.012999       1 metrics.go:72] Registering metrics
	I1019 13:18:08.013063       1 controller.go:711] "Syncing nftables rules"
	I1019 13:18:16.809009       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 13:18:16.809061       1 main.go:301] handling current node
	I1019 13:18:26.818015       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 13:18:26.818131       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c18df00f28ee52ba5914d4eb54d1df3a03b3eb40ef6d981c61e6b91411a7fcf5] <==
	I1019 13:17:35.722451       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 13:17:35.722502       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 13:17:35.722692       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 13:17:35.722746       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 13:17:35.729260       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 13:17:35.729293       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 13:17:35.729447       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 13:17:35.742182       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 13:17:35.742208       1 policy_source.go:240] refreshing policies
	I1019 13:17:35.742384       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:17:35.745481       1 cache.go:39] Caches are synced for autoregister controller
	I1019 13:17:35.745492       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1019 13:17:35.775513       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 13:17:35.803057       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:17:35.978960       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:17:36.427356       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:17:36.636835       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 13:17:36.788327       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 13:17:36.836822       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:17:36.856373       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:17:36.970819       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.0.201"}
	I1019 13:17:37.047654       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.222.169"}
	I1019 13:17:39.399531       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 13:17:39.570375       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:17:39.616931       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [039382e4cf978d4d0d233ab6e8648f97661496f0b0c36cdb5fac731f9f4f34fd] <==
	I1019 13:17:38.991402       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 13:17:38.991531       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:17:38.992658       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 13:17:38.992727       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 13:17:38.992744       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 13:17:38.992768       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 13:17:38.997873       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 13:17:38.999152       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:17:39.005465       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 13:17:39.007809       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 13:17:39.012275       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 13:17:39.015449       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 13:17:39.017740       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 13:17:39.017903       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 13:17:39.018010       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-834340"
	I1019 13:17:39.018078       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 13:17:39.025302       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 13:17:39.025543       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 13:17:39.030480       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 13:17:39.035901       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 13:17:39.038179       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 13:17:39.041860       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:17:39.041881       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 13:17:39.041889       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 13:17:39.068410       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [dcd1e089da4e3c88ca65e629976e4d87c834a1278e0da3fa1d073128a1540f9b] <==
	I1019 13:17:36.690534       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:17:36.908345       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:17:37.030243       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:17:37.030282       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 13:17:37.030357       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:17:37.122591       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:17:37.122646       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:17:37.141471       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:17:37.142050       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:17:37.142079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:17:37.146827       1 config.go:200] "Starting service config controller"
	I1019 13:17:37.146913       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:17:37.147009       1 config.go:309] "Starting node config controller"
	I1019 13:17:37.147045       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:17:37.147074       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:17:37.147244       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:17:37.147264       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:17:37.147280       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:17:37.147285       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:17:37.247772       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:17:37.247780       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 13:17:37.247815       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [716882266ac3c47cb6251f516b90c4cf3cc2bc032ff7bc8e2159a3543b734128] <==
	I1019 13:17:33.347394       1 serving.go:386] Generated self-signed cert in-memory
	W1019 13:17:35.498066       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 13:17:35.498157       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 13:17:35.498192       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 13:17:35.498239       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 13:17:35.614937       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 13:17:35.614971       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:17:35.627567       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:17:35.627811       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:17:35.629755       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:17:35.629885       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 13:17:35.650193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 13:17:35.678541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 13:17:35.678618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 13:17:35.697277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 13:17:35.697416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1019 13:17:36.633792       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 13:17:40 embed-certs-834340 kubelet[779]: E1019 13:17:40.712642     779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/86d791c0-5ed1-48b8-acec-70e583fc2449-kube-api-access-pjzp6 podName:86d791c0-5ed1-48b8-acec-70e583fc2449 nodeName:}" failed. No retries permitted until 2025-10-19 13:17:41.212626015 +0000 UTC m=+12.388168355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pjzp6" (UniqueName: "kubernetes.io/projected/86d791c0-5ed1-48b8-acec-70e583fc2449-kube-api-access-pjzp6") pod "kubernetes-dashboard-855c9754f9-m9x8r" (UID: "86d791c0-5ed1-48b8-acec-70e583fc2449") : failed to sync configmap cache: timed out waiting for the condition
	Oct 19 13:17:41 embed-certs-834340 kubelet[779]: W1019 13:17:41.393310     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/crio-d29b3670f80aea16a1b3d73ca6c8ad2026e43fa3e3c7ef5d758afe709c42ef5c WatchSource:0}: Error finding container d29b3670f80aea16a1b3d73ca6c8ad2026e43fa3e3c7ef5d758afe709c42ef5c: Status 404 returned error can't find the container with id d29b3670f80aea16a1b3d73ca6c8ad2026e43fa3e3c7ef5d758afe709c42ef5c
	Oct 19 13:17:41 embed-certs-834340 kubelet[779]: W1019 13:17:41.402453     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/crio-56400a7f42af70fef47bfdecec34fa2d48b5ae7e0a39ba11b4050522612868c8 WatchSource:0}: Error finding container 56400a7f42af70fef47bfdecec34fa2d48b5ae7e0a39ba11b4050522612868c8: Status 404 returned error can't find the container with id 56400a7f42af70fef47bfdecec34fa2d48b5ae7e0a39ba11b4050522612868c8
	Oct 19 13:17:44 embed-certs-834340 kubelet[779]: I1019 13:17:44.223853     779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 13:17:47 embed-certs-834340 kubelet[779]: I1019 13:17:47.223529     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m9x8r" podStartSLOduration=3.419142011 podStartE2EDuration="8.223514s" podCreationTimestamp="2025-10-19 13:17:39 +0000 UTC" firstStartedPulling="2025-10-19 13:17:41.396044933 +0000 UTC m=+12.571587232" lastFinishedPulling="2025-10-19 13:17:46.200416914 +0000 UTC m=+17.375959221" observedRunningTime="2025-10-19 13:17:47.223165384 +0000 UTC m=+18.398707691" watchObservedRunningTime="2025-10-19 13:17:47.223514 +0000 UTC m=+18.399056298"
	Oct 19 13:17:51 embed-certs-834340 kubelet[779]: I1019 13:17:51.217130     779 scope.go:117] "RemoveContainer" containerID="6359157b01599f4cee7dfe5237f34018dc4918b0418707f6357f49ca811eefcc"
	Oct 19 13:17:52 embed-certs-834340 kubelet[779]: I1019 13:17:52.221663     779 scope.go:117] "RemoveContainer" containerID="6359157b01599f4cee7dfe5237f34018dc4918b0418707f6357f49ca811eefcc"
	Oct 19 13:17:52 embed-certs-834340 kubelet[779]: I1019 13:17:52.222002     779 scope.go:117] "RemoveContainer" containerID="3cf4ba1462823f3fbca92b6d5cc30f2d515b2efbad2de067c41050e451024638"
	Oct 19 13:17:52 embed-certs-834340 kubelet[779]: E1019 13:17:52.222157     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jkcd_kubernetes-dashboard(15b5ab5d-1dc4-4250-afe7-f70dda24b9a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd" podUID="15b5ab5d-1dc4-4250-afe7-f70dda24b9a6"
	Oct 19 13:17:53 embed-certs-834340 kubelet[779]: I1019 13:17:53.224893     779 scope.go:117] "RemoveContainer" containerID="3cf4ba1462823f3fbca92b6d5cc30f2d515b2efbad2de067c41050e451024638"
	Oct 19 13:17:53 embed-certs-834340 kubelet[779]: E1019 13:17:53.225045     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jkcd_kubernetes-dashboard(15b5ab5d-1dc4-4250-afe7-f70dda24b9a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd" podUID="15b5ab5d-1dc4-4250-afe7-f70dda24b9a6"
	Oct 19 13:18:01 embed-certs-834340 kubelet[779]: I1019 13:18:01.359580     779 scope.go:117] "RemoveContainer" containerID="3cf4ba1462823f3fbca92b6d5cc30f2d515b2efbad2de067c41050e451024638"
	Oct 19 13:18:02 embed-certs-834340 kubelet[779]: I1019 13:18:02.259603     779 scope.go:117] "RemoveContainer" containerID="3cf4ba1462823f3fbca92b6d5cc30f2d515b2efbad2de067c41050e451024638"
	Oct 19 13:18:02 embed-certs-834340 kubelet[779]: I1019 13:18:02.259894     779 scope.go:117] "RemoveContainer" containerID="4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e"
	Oct 19 13:18:02 embed-certs-834340 kubelet[779]: E1019 13:18:02.260243     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jkcd_kubernetes-dashboard(15b5ab5d-1dc4-4250-afe7-f70dda24b9a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd" podUID="15b5ab5d-1dc4-4250-afe7-f70dda24b9a6"
	Oct 19 13:18:07 embed-certs-834340 kubelet[779]: I1019 13:18:07.276404     779 scope.go:117] "RemoveContainer" containerID="31231e1c742bdbc0a3dba61c64b968fd68a7bb9fa8d9ab32f58da69d755f6dcc"
	Oct 19 13:18:11 embed-certs-834340 kubelet[779]: I1019 13:18:11.359155     779 scope.go:117] "RemoveContainer" containerID="4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e"
	Oct 19 13:18:11 embed-certs-834340 kubelet[779]: E1019 13:18:11.359790     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jkcd_kubernetes-dashboard(15b5ab5d-1dc4-4250-afe7-f70dda24b9a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd" podUID="15b5ab5d-1dc4-4250-afe7-f70dda24b9a6"
	Oct 19 13:18:27 embed-certs-834340 kubelet[779]: I1019 13:18:27.048422     779 scope.go:117] "RemoveContainer" containerID="4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e"
	Oct 19 13:18:27 embed-certs-834340 kubelet[779]: I1019 13:18:27.327458     779 scope.go:117] "RemoveContainer" containerID="4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e"
	Oct 19 13:18:27 embed-certs-834340 kubelet[779]: I1019 13:18:27.327828     779 scope.go:117] "RemoveContainer" containerID="f2b22a7c199217cfb9c5c6c994f073ef81dd212c2d9eb9450de07cf8ab355502"
	Oct 19 13:18:27 embed-certs-834340 kubelet[779]: E1019 13:18:27.330943     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jkcd_kubernetes-dashboard(15b5ab5d-1dc4-4250-afe7-f70dda24b9a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd" podUID="15b5ab5d-1dc4-4250-afe7-f70dda24b9a6"
	Oct 19 13:18:28 embed-certs-834340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 13:18:28 embed-certs-834340 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 13:18:28 embed-certs-834340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1c4acb28dc65c2cf95e3cd764af3122d1e0110b3c5d4eed9941f8e009ca9688f] <==
	2025/10/19 13:17:46 Using namespace: kubernetes-dashboard
	2025/10/19 13:17:46 Using in-cluster config to connect to apiserver
	2025/10/19 13:17:46 Using secret token for csrf signing
	2025/10/19 13:17:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 13:17:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 13:17:46 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 13:17:46 Generating JWE encryption key
	2025/10/19 13:17:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 13:17:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 13:17:46 Initializing JWE encryption key from synchronized object
	2025/10/19 13:17:46 Creating in-cluster Sidecar client
	2025/10/19 13:17:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:17:46 Serving insecurely on HTTP port: 9090
	2025/10/19 13:18:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:17:46 Starting overwatch
	
	
	==> storage-provisioner [001f191f75075fcbbccb52988562ecce0820f9a6c12edc5db65687f5b91128b8] <==
	I1019 13:18:07.341396       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 13:18:07.355056       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 13:18:07.355102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 13:18:07.357743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:10.813327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:15.073577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:18.672441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:21.726944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:24.749055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:24.756460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:18:24.756678       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 13:18:24.759336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-834340_3c67bc1e-c87b-473a-a5b8-8dfee52678ff!
	I1019 13:18:24.757090       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1d54810-d394-48c2-ac3f-ee098575b9a6", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-834340_3c67bc1e-c87b-473a-a5b8-8dfee52678ff became leader
	W1019 13:18:24.761651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:24.771011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:18:24.859755       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-834340_3c67bc1e-c87b-473a-a5b8-8dfee52678ff!
	W1019 13:18:26.775649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:26.785003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:28.789654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:28.795023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:30.813900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:30.822030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:32.827273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:32.850335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [31231e1c742bdbc0a3dba61c64b968fd68a7bb9fa8d9ab32f58da69d755f6dcc] <==
	I1019 13:17:36.676820       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 13:18:06.679397       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
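Editor's note: the kubelet section above shows dashboard-metrics-scraper stuck in CrashLoopBackOff, with the kubelet's restart back-off doubling as expected (10s, then 20s, then 40s; it caps at 5m). A minimal shell sketch for pulling that evidence out of the same dump and for fetching the crashed container's own output; the profile and namespace names are taken from this report, and the commands are illustrative rather than part of the test harness:

	# grep the captured dump for the back-off/restart lines quoted above
	out/minikube-linux-arm64 -p embed-certs-834340 logs -n 25 | grep -E 'CrashLoopBackOff|RemoveContainer'
	# --previous prints the output of the last terminated instance of the container
	kubectl --context embed-certs-834340 -n kubernetes-dashboard \
	  logs deploy/dashboard-metrics-scraper --previous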
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-834340 -n embed-certs-834340
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-834340 -n embed-certs-834340: exit status 2 (467.309703ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-834340 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
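Editor's note on the two checks above: minikube's status subcommand accepts a Go template over its status struct (fields such as Host, APIServer, Kubelet, Kubeconfig), and a non-zero exit deliberately encodes which components are not running, which is why the harness annotates "exit status 2 (may be ok)". The kubectl line lists only pods whose phase is not Running. A minimal sketch of both, assuming the profile/context name from this report:

	# non-zero exit is expected while a component is stopped or paused
	out/minikube-linux-arm64 status -p embed-certs-834340 \
	  --format='{{.Host}} {{.APIServer}} {{.Kubelet}}' || true
	# jsonpath prints only the names; the field selector filters out Running pods
	kubectl --context embed-certs-834340 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'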
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-834340
helpers_test.go:243: (dbg) docker inspect embed-certs-834340:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59",
	        "Created": "2025-10-19T13:15:37.885260353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 493611,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:17:21.675656585Z",
	            "FinishedAt": "2025-10-19T13:17:20.815460179Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/hosts",
	        "LogPath": "/var/lib/docker/containers/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59-json.log",
	        "Name": "/embed-certs-834340",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-834340:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-834340",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59",
	                "LowerDir": "/var/lib/docker/overlay2/fd9e9f7bbe80ae9f84f50f65044e2fc095d54180303dacdaaf2af69ede890f60-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fd9e9f7bbe80ae9f84f50f65044e2fc095d54180303dacdaaf2af69ede890f60/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fd9e9f7bbe80ae9f84f50f65044e2fc095d54180303dacdaaf2af69ede890f60/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fd9e9f7bbe80ae9f84f50f65044e2fc095d54180303dacdaaf2af69ede890f60/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-834340",
	                "Source": "/var/lib/docker/volumes/embed-certs-834340/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-834340",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-834340",
	                "name.minikube.sigs.k8s.io": "embed-certs-834340",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f9ea60cc0940c33186ba2db94015034d39edbe57374748acf72e3ba5630448e",
	            "SandboxKey": "/var/run/docker/netns/2f9ea60cc094",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-834340": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:2c:25:03:e1:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4736119f136360f6c549379b3521579c84fb2cab47b61b166d29a201ac636c1c",
	                    "EndpointID": "463d4ff8c65d9cd7d4c11dfec1b89a5b28f89fd5452677e910c937ba9be8a5f1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-834340",
	                        "9a5cfef083e8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
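Editor's note: instead of reading the full JSON above, the few fields this post-mortem actually relies on (container state, the host port mapped to the API server, the cluster IP) can be extracted directly, since docker inspect -f takes a Go template. A hedged sketch using the container and network names from this report:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-834340
	# host port bound to 8443/tcp (the API server) inside the container
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-834340
	docker inspect -f '{{(index .NetworkSettings.Networks "embed-certs-834340").IPAddress}}' embed-certs-834340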
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-834340 -n embed-certs-834340
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-834340 -n embed-certs-834340: exit status 2 (486.671976ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-834340 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-834340 logs -n 25: (1.982092543s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-108149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ stop    │ -p no-preload-108149 --alsologtostderr -v=3                                                                                                                              │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ addons  │ enable dashboard -p no-preload-108149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:16 UTC │
	│ image   │ old-k8s-version-842494 image list --format=json                                                                                                                          │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ pause   │ -p old-k8s-version-842494 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:16 UTC │
	│ image   │ no-preload-108149 image list --format=json                                                                                                                               │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ pause   │ -p no-preload-108149 --alsologtostderr -v=1                                                                                                                              │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │                     │
	│ delete  │ -p no-preload-108149                                                                                                                                                     │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p no-preload-108149                                                                                                                                                     │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p disable-driver-mounts-418719                                                                                                                                          │ disable-driver-mounts-418719 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │                     │
	│ stop    │ -p embed-certs-834340 --alsologtostderr -v=3                                                                                                                             │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-834340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-455348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-455348 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-455348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ image   │ embed-certs-834340 image list --format=json                                                                                                                              │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ pause   │ -p embed-certs-834340 --alsologtostderr -v=1                                                                                                                             │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:18:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:18:21.540162  496573 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:18:21.540367  496573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:18:21.540397  496573 out.go:374] Setting ErrFile to fd 2...
	I1019 13:18:21.540421  496573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:18:21.540698  496573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:18:21.541162  496573 out.go:368] Setting JSON to false
	I1019 13:18:21.542294  496573 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10852,"bootTime":1760869050,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:18:21.542406  496573 start.go:141] virtualization:  
	I1019 13:18:21.545586  496573 out.go:179] * [default-k8s-diff-port-455348] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:18:21.549490  496573 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:18:21.549673  496573 notify.go:220] Checking for updates...
	I1019 13:18:21.555924  496573 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:18:21.559277  496573 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:18:21.562320  496573 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:18:21.565406  496573 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:18:21.568364  496573 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:18:21.571937  496573 config.go:182] Loaded profile config "default-k8s-diff-port-455348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:18:21.572714  496573 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:18:21.604700  496573 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:18:21.604828  496573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:18:21.665785  496573 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:18:21.656439448 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:18:21.665899  496573 docker.go:318] overlay module found
	I1019 13:18:21.668993  496573 out.go:179] * Using the docker driver based on existing profile
	I1019 13:18:21.671685  496573 start.go:305] selected driver: docker
	I1019 13:18:21.671704  496573 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-455348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:18:21.671811  496573 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:18:21.672546  496573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:18:21.745013  496573 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:18:21.735243467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:18:21.745355  496573 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:18:21.745404  496573 cni.go:84] Creating CNI manager for ""
	I1019 13:18:21.745467  496573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:18:21.745513  496573 start.go:349] cluster config:
	{Name:default-k8s-diff-port-455348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:18:21.751430  496573 out.go:179] * Starting "default-k8s-diff-port-455348" primary control-plane node in "default-k8s-diff-port-455348" cluster
	I1019 13:18:21.754281  496573 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:18:21.757257  496573 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:18:21.760200  496573 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:18:21.760264  496573 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 13:18:21.760277  496573 cache.go:58] Caching tarball of preloaded images
	I1019 13:18:21.760305  496573 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:18:21.760380  496573 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:18:21.760389  496573 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 13:18:21.760498  496573 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/config.json ...
	I1019 13:18:21.786722  496573 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:18:21.786742  496573 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:18:21.786777  496573 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:18:21.786801  496573 start.go:360] acquireMachinesLock for default-k8s-diff-port-455348: {Name:mk240c57fae30746abb498299da3308a8a0334da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:18:21.786867  496573 start.go:364] duration metric: took 48.862µs to acquireMachinesLock for "default-k8s-diff-port-455348"
	I1019 13:18:21.786888  496573 start.go:96] Skipping create...Using existing machine configuration
	I1019 13:18:21.786894  496573 fix.go:54] fixHost starting: 
	I1019 13:18:21.787147  496573 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:18:21.804524  496573 fix.go:112] recreateIfNeeded on default-k8s-diff-port-455348: state=Stopped err=<nil>
	W1019 13:18:21.804552  496573 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 13:18:21.807900  496573 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-455348" ...
	I1019 13:18:21.807992  496573 cli_runner.go:164] Run: docker start default-k8s-diff-port-455348
	I1019 13:18:22.059879  496573 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:18:22.091504  496573 kic.go:430] container "default-k8s-diff-port-455348" state is running.
	I1019 13:18:22.092034  496573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-455348
	I1019 13:18:22.116803  496573 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/config.json ...
	I1019 13:18:22.117259  496573 machine.go:93] provisionDockerMachine start ...
	I1019 13:18:22.117336  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:22.152887  496573 main.go:141] libmachine: Using SSH client type: native
	I1019 13:18:22.153259  496573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1019 13:18:22.153269  496573 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:18:22.153985  496573 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35922->127.0.0.1:33453: read: connection reset by peer
	I1019 13:18:25.301379  496573 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-455348
	
	I1019 13:18:25.301411  496573 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-455348"
	I1019 13:18:25.301475  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:25.320007  496573 main.go:141] libmachine: Using SSH client type: native
	I1019 13:18:25.320900  496573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1019 13:18:25.320918  496573 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-455348 && echo "default-k8s-diff-port-455348" | sudo tee /etc/hostname
	I1019 13:18:25.481093  496573 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-455348
	
	I1019 13:18:25.481179  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:25.499094  496573 main.go:141] libmachine: Using SSH client type: native
	I1019 13:18:25.499410  496573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1019 13:18:25.499437  496573 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-455348' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-455348/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-455348' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:18:25.645895  496573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
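The three SSH commands above are minikube's hostname provisioning step: set the transient hostname, persist it to /etc/hostname, then make /etc/hosts resolve it via the Debian-style 127.0.1.1 alias. A minimal standalone sketch of the same idempotent update, with NAME as a stand-in variable (not taken from this log):

    #!/usr/bin/env bash
    NAME=default-k8s-diff-port-455348            # placeholder profile name
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then      # not mapped yet
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        # replace the existing 127.0.1.1 alias instead of appending a duplicate
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
      else
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi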
	I1019 13:18:25.645979  496573 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:18:25.646038  496573 ubuntu.go:190] setting up certificates
	I1019 13:18:25.646073  496573 provision.go:84] configureAuth start
	I1019 13:18:25.646172  496573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-455348
	I1019 13:18:25.662741  496573 provision.go:143] copyHostCerts
	I1019 13:18:25.662812  496573 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:18:25.662833  496573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:18:25.662918  496573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:18:25.663030  496573 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:18:25.663042  496573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:18:25.663070  496573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:18:25.663144  496573 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:18:25.663153  496573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:18:25.663178  496573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:18:25.663243  496573 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-455348 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-455348 localhost minikube]
	I1019 13:18:25.976151  496573 provision.go:177] copyRemoteCerts
	I1019 13:18:25.976231  496573 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:18:25.976272  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:25.995607  496573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:18:26.113383  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:18:26.131481  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 13:18:26.150518  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 13:18:26.168626  496573 provision.go:87] duration metric: took 522.513446ms to configureAuth
	I1019 13:18:26.168656  496573 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:18:26.168857  496573 config.go:182] Loaded profile config "default-k8s-diff-port-455348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:18:26.168969  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:26.186727  496573 main.go:141] libmachine: Using SSH client type: native
	I1019 13:18:26.187047  496573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1019 13:18:26.187073  496573 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:18:26.502325  496573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:18:26.502353  496573 machine.go:96] duration metric: took 4.385081913s to provisionDockerMachine
	I1019 13:18:26.502364  496573 start.go:293] postStartSetup for "default-k8s-diff-port-455348" (driver="docker")
	I1019 13:18:26.502375  496573 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:18:26.502442  496573 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:18:26.502484  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:26.523004  496573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:18:26.629883  496573 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:18:26.633291  496573 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:18:26.633318  496573 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:18:26.633330  496573 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:18:26.633381  496573 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:18:26.633462  496573 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:18:26.633564  496573 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:18:26.641241  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:18:26.659234  496573 start.go:296] duration metric: took 156.854739ms for postStartSetup
	I1019 13:18:26.659315  496573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:18:26.659386  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:26.677727  496573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:18:26.780612  496573 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:18:26.785941  496573 fix.go:56] duration metric: took 4.999040058s for fixHost
	I1019 13:18:26.785967  496573 start.go:83] releasing machines lock for "default-k8s-diff-port-455348", held for 4.999090815s
	I1019 13:18:26.786044  496573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-455348
	I1019 13:18:26.802566  496573 ssh_runner.go:195] Run: cat /version.json
	I1019 13:18:26.802621  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:26.802876  496573 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:18:26.802937  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:26.825145  496573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:18:26.835235  496573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:18:26.925402  496573 ssh_runner.go:195] Run: systemctl --version
	I1019 13:18:27.022488  496573 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:18:27.085158  496573 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:18:27.093913  496573 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:18:27.094013  496573 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:18:27.103737  496573 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 13:18:27.103788  496573 start.go:495] detecting cgroup driver to use...
	I1019 13:18:27.103870  496573 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:18:27.103943  496573 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:18:27.133073  496573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:18:27.147883  496573 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:18:27.147987  496573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:18:27.164439  496573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:18:27.178049  496573 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:18:27.318021  496573 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:18:27.458546  496573 docker.go:234] disabling docker service ...
	I1019 13:18:27.458691  496573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:18:27.476487  496573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:18:27.490783  496573 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:18:27.665360  496573 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:18:27.846822  496573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:18:27.863159  496573 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:18:27.889305  496573 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 13:18:27.889368  496573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.909962  496573 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:18:27.910030  496573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.920638  496573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.933051  496573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.945761  496573 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:18:27.954172  496573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.967798  496573 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.977139  496573 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:27.988498  496573 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:18:27.997550  496573 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:18:28.007935  496573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:18:28.205248  496573 ssh_runner.go:195] Run: sudo systemctl restart crio
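The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf before this restart: pin the pause image, select the cgroupfs cgroup manager, run conmon in the pod cgroup, and allow unprivileged binds to low ports. A sketch of the drop-in those edits converge on, written to a hypothetical .example path so as not to claim this is the file's verbatim final state:

    sudo tee /etc/crio/crio.conf.d/02-crio.conf.example >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF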
	I1019 13:18:28.348676  496573 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:18:28.348754  496573 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:18:28.353007  496573 start.go:563] Will wait 60s for crictl version
	I1019 13:18:28.353068  496573 ssh_runner.go:195] Run: which crictl
	I1019 13:18:28.360341  496573 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:18:28.421076  496573 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
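The bare crictl calls above (and sudo crictl images --output json further down) work because the /etc/crictl.yaml written a few lines earlier points the client at the CRI-O socket. The per-invocation equivalent, as a sketch:

    # With /etc/crictl.yaml in place the endpoint is implicit:
    sudo crictl version
    # Without it, the same call needs the endpoint spelled out:
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version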
	I1019 13:18:28.421188  496573 ssh_runner.go:195] Run: crio --version
	I1019 13:18:28.460680  496573 ssh_runner.go:195] Run: crio --version
	I1019 13:18:28.503821  496573 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 13:18:28.506915  496573 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-455348 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:18:28.530375  496573 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 13:18:28.537035  496573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
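The one-liner above refreshes the host.minikube.internal mapping by filtering the old entry into a temp file, appending the new line, and copying the result back. The cp (rather than mv or sed -i) matters: /etc/hosts inside the container is a bind mount, so its contents can be rewritten but the file itself cannot be replaced. Restated as a sketch:

    # Rewrite contents in place; mv or sed -i would fail on the bind-mounted file.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.76.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$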
	I1019 13:18:28.547802  496573 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-455348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 13:18:28.547921  496573 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:18:28.547988  496573 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:18:28.586475  496573 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:18:28.586494  496573 crio.go:433] Images already preloaded, skipping extraction
	I1019 13:18:28.586538  496573 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:18:28.627103  496573 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:18:28.627125  496573 cache_images.go:85] Images are preloaded, skipping loading
	I1019 13:18:28.627133  496573 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1019 13:18:28.627263  496573 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-455348 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 13:18:28.627353  496573 ssh_runner.go:195] Run: crio config
	I1019 13:18:28.704851  496573 cni.go:84] Creating CNI manager for ""
	I1019 13:18:28.704877  496573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:18:28.704898  496573 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 13:18:28.704925  496573 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-455348 NodeName:default-k8s-diff-port-455348 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:18:28.705062  496573 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-455348"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 13:18:28.705135  496573 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 13:18:28.713212  496573 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 13:18:28.713286  496573 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 13:18:28.720726  496573 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1019 13:18:28.734174  496573 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:18:28.748809  496573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
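The kubeadm.yaml.new staged above is later diffed against the previous copy to decide whether this restart needs reconfiguration. For a config like the one printed earlier, kubeadm itself can also sanity-check the file; a hedged sketch, assuming the kubeadm binary sits alongside the kubelet and kubectl binaries listed in this log (kubeadm config validate exists in v1.26 and later):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new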
	I1019 13:18:28.762503  496573 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 13:18:28.766312  496573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:18:28.776381  496573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:18:28.897873  496573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:18:28.916528  496573 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348 for IP: 192.168.76.2
	I1019 13:18:28.916550  496573 certs.go:195] generating shared ca certs ...
	I1019 13:18:28.916567  496573 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:28.916741  496573 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 13:18:28.916799  496573 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 13:18:28.916821  496573 certs.go:257] generating profile certs ...
	I1019 13:18:28.916927  496573 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.key
	I1019 13:18:28.917014  496573 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.key.223e319e
	I1019 13:18:28.917065  496573 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.key
	I1019 13:18:28.917190  496573 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem (1338 bytes)
	W1019 13:18:28.917237  496573 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518_empty.pem, impossibly tiny 0 bytes
	I1019 13:18:28.917250  496573 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 13:18:28.917282  496573 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 13:18:28.917310  496573 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 13:18:28.917335  496573 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 13:18:28.917391  496573 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:18:28.918149  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 13:18:28.936531  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 13:18:28.954178  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 13:18:28.971416  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 13:18:28.989202  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1019 13:18:29.029614  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 13:18:29.049013  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 13:18:29.079525  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 13:18:29.139527  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem --> /usr/share/ca-certificates/294518.pem (1338 bytes)
	I1019 13:18:29.183461  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /usr/share/ca-certificates/2945182.pem (1708 bytes)
	I1019 13:18:29.234775  496573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 13:18:29.256094  496573 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 13:18:29.271752  496573 ssh_runner.go:195] Run: openssl version
	I1019 13:18:29.279834  496573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294518.pem && ln -fs /usr/share/ca-certificates/294518.pem /etc/ssl/certs/294518.pem"
	I1019 13:18:29.289035  496573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294518.pem
	I1019 13:18:29.293486  496573 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:20 /usr/share/ca-certificates/294518.pem
	I1019 13:18:29.293605  496573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294518.pem
	I1019 13:18:29.346669  496573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294518.pem /etc/ssl/certs/51391683.0"
	I1019 13:18:29.355542  496573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2945182.pem && ln -fs /usr/share/ca-certificates/2945182.pem /etc/ssl/certs/2945182.pem"
	I1019 13:18:29.364470  496573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2945182.pem
	I1019 13:18:29.368960  496573 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:20 /usr/share/ca-certificates/2945182.pem
	I1019 13:18:29.369074  496573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2945182.pem
	I1019 13:18:29.413391  496573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2945182.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 13:18:29.422322  496573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 13:18:29.430738  496573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:18:29.434819  496573 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:18:29.434925  496573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:18:29.478327  496573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
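Each certificate install above follows the OpenSSL CA-directory convention: copy the PEM into /usr/share/ca-certificates, then link it from /etc/ssl/certs under its subject hash with a .0 suffix; that is where the names 51391683.0, 3ec20f2e.0, and b5213941.0 come from. One round as a sketch:

    # Compute the subject hash and create the <hash>.0 symlink OpenSSL looks up.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"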
	I1019 13:18:29.486540  496573 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 13:18:29.490505  496573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 13:18:29.532956  496573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 13:18:29.577751  496573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 13:18:29.619524  496573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 13:18:29.668087  496573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 13:18:29.721112  496573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
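The -checkend 86400 probes above exit non-zero when a certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether the existing control-plane certs can be reused. A sketch of one check:

    # Exit 0: valid for at least another day; exit 1: expiring or expired.
    if openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/front-proxy-client.crt; then
      echo "cert still valid for >= 24h"
    else
      echo "cert expires within 24h; needs regeneration"
    fi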
	I1019 13:18:29.766001  496573 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-455348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-455348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:18:29.766137  496573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 13:18:29.766241  496573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 13:18:29.861392  496573 cri.go:89] found id: "9dc424071c1b92771542bfccd38e435461e8182ac00adb300909438d1cbf9b8f"
	I1019 13:18:29.861416  496573 cri.go:89] found id: "b34e96695557c6959cce715a57b32eef60a662626ab95fd5b08a3505f2cfe53a"
	I1019 13:18:29.861431  496573 cri.go:89] found id: "e5b09162fcaf4578399f5a03831d7d61cf4bfd1901478ea7fed991f19b9f174e"
	I1019 13:18:29.861436  496573 cri.go:89] found id: ""
	I1019 13:18:29.861527  496573 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 13:18:29.893771  496573 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:18:29Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:18:29.893899  496573 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 13:18:29.913836  496573 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 13:18:29.913858  496573 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 13:18:29.913940  496573 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 13:18:29.929442  496573 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 13:18:29.930356  496573 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-455348" does not appear in /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:18:29.930973  496573 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-292654/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-455348" cluster setting kubeconfig missing "default-k8s-diff-port-455348" context setting]
	I1019 13:18:29.931970  496573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:29.934201  496573 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 13:18:29.963606  496573 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 13:18:29.963692  496573 kubeadm.go:601] duration metric: took 49.826861ms to restartPrimaryControlPlane
	I1019 13:18:29.963716  496573 kubeadm.go:402] duration metric: took 197.723226ms to StartCluster
	I1019 13:18:29.963750  496573 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:29.963832  496573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:18:29.965374  496573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:29.965806  496573 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:18:29.966248  496573 config.go:182] Loaded profile config "default-k8s-diff-port-455348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:18:29.966222  496573 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 13:18:29.966399  496573 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-455348"
	I1019 13:18:29.966446  496573 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-455348"
	W1019 13:18:29.966466  496573 addons.go:247] addon storage-provisioner should already be in state true
	I1019 13:18:29.966501  496573 host.go:66] Checking if "default-k8s-diff-port-455348" exists ...
	I1019 13:18:29.967103  496573 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:18:29.967289  496573 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-455348"
	I1019 13:18:29.967333  496573 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-455348"
	W1019 13:18:29.967353  496573 addons.go:247] addon dashboard should already be in state true
	I1019 13:18:29.967409  496573 host.go:66] Checking if "default-k8s-diff-port-455348" exists ...
	I1019 13:18:29.967624  496573 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-455348"
	I1019 13:18:29.967639  496573 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-455348"
	I1019 13:18:29.967867  496573 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:18:29.968338  496573 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:18:29.972667  496573 out.go:179] * Verifying Kubernetes components...
	I1019 13:18:29.979311  496573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:18:30.034501  496573 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 13:18:30.039105  496573 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:18:30.039129  496573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 13:18:30.039199  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:30.039443  496573 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-455348"
	W1019 13:18:30.039458  496573 addons.go:247] addon default-storageclass should already be in state true
	I1019 13:18:30.039486  496573 host.go:66] Checking if "default-k8s-diff-port-455348" exists ...
	I1019 13:18:30.039915  496573 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:18:30.058246  496573 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 13:18:30.061853  496573 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 13:18:30.064878  496573 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 13:18:30.064909  496573 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 13:18:30.064998  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:30.104383  496573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:18:30.114287  496573 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 13:18:30.114309  496573 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 13:18:30.114377  496573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:18:30.129922  496573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:18:30.163696  496573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:18:30.503358  496573 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 13:18:30.503379  496573 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 13:18:30.535033  496573 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 13:18:30.535054  496573 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 13:18:30.590569  496573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:18:30.594046  496573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:18:30.609971  496573 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 13:18:30.610003  496573 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 13:18:30.642230  496573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 13:18:30.676519  496573 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-455348" to be "Ready" ...
	I1019 13:18:30.708490  496573 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 13:18:30.708509  496573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 13:18:30.780630  496573 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 13:18:30.780661  496573 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 13:18:30.857193  496573 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 13:18:30.857215  496573 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 13:18:30.894618  496573 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 13:18:30.894639  496573 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 13:18:30.962902  496573 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 13:18:30.962930  496573 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 13:18:30.983532  496573 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 13:18:30.983569  496573 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 13:18:31.028783  496573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
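
The lines above show minikube's addon-install pattern end to end: each manifest is copied over SSH into /etc/kubernetes/addons/ (the host port that Docker mapped to the node container's 22/tcp is resolved with a Go-template via docker container inspect), and all manifests are then applied in a single kubectl invocation, one -f flag per file. Below is a minimal standalone sketch of that pattern in Go; it is not minikube's actual code, and only the container name and two of the file names are taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the host port Docker mapped to 22/tcp inside the node
	// container, exactly as the cli_runner lines above do.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"default-k8s-diff-port-455348").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33453 in this run

	// Stage one -f flag per manifest and apply them all in a single
	// call, mirroring the final kubectl line in the log (list trimmed).
	files := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	fmt.Println("would run: kubectl", strings.Join(args, " "))
}
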
	
	
	==> CRI-O <==
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.813115183Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.816311297Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.816345701Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.816366822Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.819438527Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.819475992Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.819497974Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.822752779Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.822788374Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.822811094Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.825867751Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:18:16 embed-certs-834340 crio[652]: time="2025-10-19T13:18:16.825933483Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.048864531Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=41b3bfc6-9939-464d-bcc2-750ac0e08129 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.049715053Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d588a416-0757-4213-ad11-938729c2db21 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.050749816Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd/dashboard-metrics-scraper" id=a194e46a-1843-4bd4-91cd-634dbbebb2d4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.050998952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.05838818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.059033252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.104275074Z" level=info msg="Created container f2b22a7c199217cfb9c5c6c994f073ef81dd212c2d9eb9450de07cf8ab355502: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd/dashboard-metrics-scraper" id=a194e46a-1843-4bd4-91cd-634dbbebb2d4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.110103469Z" level=info msg="Starting container: f2b22a7c199217cfb9c5c6c994f073ef81dd212c2d9eb9450de07cf8ab355502" id=c209a67b-9848-46cb-b6ba-0071a897075a name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.11215834Z" level=info msg="Started container" PID=1734 containerID=f2b22a7c199217cfb9c5c6c994f073ef81dd212c2d9eb9450de07cf8ab355502 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd/dashboard-metrics-scraper id=c209a67b-9848-46cb-b6ba-0071a897075a name=/runtime.v1.RuntimeService/StartContainer sandboxID=56400a7f42af70fef47bfdecec34fa2d48b5ae7e0a39ba11b4050522612868c8
	Oct 19 13:18:27 embed-certs-834340 conmon[1730]: conmon f2b22a7c199217cfb9c5 <ninfo>: container 1734 exited with status 1
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.337106947Z" level=info msg="Removing container: 4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e" id=a9410066-8983-4630-b81f-04cbfea6b8d6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.347763248Z" level=info msg="Error loading conmon cgroup of container 4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e: cgroup deleted" id=a9410066-8983-4630-b81f-04cbfea6b8d6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:18:27 embed-certs-834340 crio[652]: time="2025-10-19T13:18:27.373801239Z" level=info msg="Removed container 4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd/dashboard-metrics-scraper" id=a9410066-8983-4630-b81f-04cbfea6b8d6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f2b22a7c19921       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   56400a7f42af7       dashboard-metrics-scraper-6ffb444bf9-9jkcd   kubernetes-dashboard
	001f191f75075       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   67b876e70c68d       storage-provisioner                          kube-system
	1c4acb28dc65c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   d29b3670f80ae       kubernetes-dashboard-855c9754f9-m9x8r        kubernetes-dashboard
	b8c9fc48127f6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   cb24a785b353a       coredns-66bc5c9577-sgj8p                     kube-system
	7aa060bc9ee4f       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   91f2df5cd79a7       busybox                                      default
	dcd1e089da4e3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   1f2e1c6c33265       kube-proxy-2skj7                             kube-system
	b855e342325c3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   481e1e76112b0       kindnet-cbzm8                                kube-system
	31231e1c742bd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   67b876e70c68d       storage-provisioner                          kube-system
	df2f9b832fba0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   9f04ef29e043e       etcd-embed-certs-834340                      kube-system
	716882266ac3c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   7005f8628621d       kube-scheduler-embed-certs-834340            kube-system
	c18df00f28ee5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   93dc6de41d205       kube-apiserver-embed-certs-834340            kube-system
	039382e4cf978       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   7326fc743321b       kube-controller-manager-embed-certs-834340   kube-system
	
	
	==> coredns [b8c9fc48127f67fc25c4e79ef9da91ed21a917166d93bd3a182b72817f225588] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43480 - 17811 "HINFO IN 5154733616253157874.8180394126122793850. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013301227s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-834340
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-834340
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=embed-certs-834340
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_16_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:16:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-834340
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:18:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:18:06 +0000   Sun, 19 Oct 2025 13:15:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:18:06 +0000   Sun, 19 Oct 2025 13:15:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:18:06 +0000   Sun, 19 Oct 2025 13:15:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:18:06 +0000   Sun, 19 Oct 2025 13:16:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-834340
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                89f6ba5e-d968-48de-b86a-37b91a3521e1
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-sgj8p                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 etcd-embed-certs-834340                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-cbzm8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-embed-certs-834340             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-embed-certs-834340    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-2skj7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-embed-certs-834340             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9jkcd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m9x8r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m22s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Normal   Starting                 2m37s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node embed-certs-834340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node embed-certs-834340 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node embed-certs-834340 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m29s                  kubelet          Node embed-certs-834340 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m29s                  kubelet          Node embed-certs-834340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m29s                  kubelet          Node embed-certs-834340 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m24s                  node-controller  Node embed-certs-834340 event: Registered Node embed-certs-834340 in Controller
	  Normal   NodeReady                102s                   kubelet          Node embed-certs-834340 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node embed-certs-834340 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node embed-certs-834340 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node embed-certs-834340 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node embed-certs-834340 event: Registered Node embed-certs-834340 in Controller
	
	
	==> dmesg <==
	[Oct19 12:56] overlayfs: idmapped layers are currently not supported
	[ +16.315179] overlayfs: idmapped layers are currently not supported
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	[Oct19 13:15] overlayfs: idmapped layers are currently not supported
	[ +34.413925] overlayfs: idmapped layers are currently not supported
	[Oct19 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.716246] overlayfs: idmapped layers are currently not supported
	[Oct19 13:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [df2f9b832fba0474917a867bc16694bb71f4c9133c4184692e7e5197a908612c] <==
	{"level":"warn","ts":"2025-10-19T13:17:34.314610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.336104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.356759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.371152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.396413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.418087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.448765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.465516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.488141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.492067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.515670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.525171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.565917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.575191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.585919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.612986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.630504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.640405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.656055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.677498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.691099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.742336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.762657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.778697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:17:34.866054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58510","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:18:35 up  3:01,  0 user,  load average: 3.49, 3.34, 2.86
	Linux embed-certs-834340 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b855e342325c3ece53dabdea13c7937afcd20c23726eca4569481c9fd68ab9dc] <==
	I1019 13:17:36.624716       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:17:36.624939       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 13:17:36.625071       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:17:36.625082       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:17:36.625091       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:17:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:17:36.812115       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:17:36.812159       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:17:36.812172       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:17:36.813074       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 13:18:06.809306       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 13:18:06.812836       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 13:18:06.812948       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 13:18:06.813029       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1019 13:18:08.012964       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:18:08.012999       1 metrics.go:72] Registering metrics
	I1019 13:18:08.013063       1 controller.go:711] "Syncing nftables rules"
	I1019 13:18:16.809009       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 13:18:16.809061       1 main.go:301] handling current node
	I1019 13:18:26.818015       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 13:18:26.818131       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c18df00f28ee52ba5914d4eb54d1df3a03b3eb40ef6d981c61e6b91411a7fcf5] <==
	I1019 13:17:35.722451       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 13:17:35.722502       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 13:17:35.722692       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 13:17:35.722746       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 13:17:35.729260       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 13:17:35.729293       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 13:17:35.729447       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 13:17:35.742182       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 13:17:35.742208       1 policy_source.go:240] refreshing policies
	I1019 13:17:35.742384       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:17:35.745481       1 cache.go:39] Caches are synced for autoregister controller
	I1019 13:17:35.745492       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1019 13:17:35.775513       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 13:17:35.803057       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:17:35.978960       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:17:36.427356       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:17:36.636835       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 13:17:36.788327       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 13:17:36.836822       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:17:36.856373       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:17:36.970819       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.0.201"}
	I1019 13:17:37.047654       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.222.169"}
	I1019 13:17:39.399531       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 13:17:39.570375       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:17:39.616931       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [039382e4cf978d4d0d233ab6e8648f97661496f0b0c36cdb5fac731f9f4f34fd] <==
	I1019 13:17:38.991402       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 13:17:38.991531       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:17:38.992658       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 13:17:38.992727       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 13:17:38.992744       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 13:17:38.992768       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 13:17:38.997873       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 13:17:38.999152       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:17:39.005465       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 13:17:39.007809       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 13:17:39.012275       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 13:17:39.015449       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 13:17:39.017740       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 13:17:39.017903       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 13:17:39.018010       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-834340"
	I1019 13:17:39.018078       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 13:17:39.025302       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 13:17:39.025543       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 13:17:39.030480       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 13:17:39.035901       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 13:17:39.038179       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 13:17:39.041860       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:17:39.041881       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 13:17:39.041889       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 13:17:39.068410       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [dcd1e089da4e3c88ca65e629976e4d87c834a1278e0da3fa1d073128a1540f9b] <==
	I1019 13:17:36.690534       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:17:36.908345       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:17:37.030243       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:17:37.030282       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 13:17:37.030357       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:17:37.122591       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:17:37.122646       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:17:37.141471       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:17:37.142050       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:17:37.142079       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:17:37.146827       1 config.go:200] "Starting service config controller"
	I1019 13:17:37.146913       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:17:37.147009       1 config.go:309] "Starting node config controller"
	I1019 13:17:37.147045       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:17:37.147074       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:17:37.147244       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:17:37.147264       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:17:37.147280       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:17:37.147285       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:17:37.247772       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:17:37.247780       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 13:17:37.247815       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [716882266ac3c47cb6251f516b90c4cf3cc2bc032ff7bc8e2159a3543b734128] <==
	I1019 13:17:33.347394       1 serving.go:386] Generated self-signed cert in-memory
	W1019 13:17:35.498066       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 13:17:35.498157       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 13:17:35.498192       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 13:17:35.498239       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 13:17:35.614937       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 13:17:35.614971       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:17:35.627567       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:17:35.627811       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:17:35.629755       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:17:35.629885       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 13:17:35.650193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 13:17:35.678541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 13:17:35.678618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 13:17:35.697277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 13:17:35.697416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1019 13:17:36.633792       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 13:17:40 embed-certs-834340 kubelet[779]: E1019 13:17:40.712642     779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/86d791c0-5ed1-48b8-acec-70e583fc2449-kube-api-access-pjzp6 podName:86d791c0-5ed1-48b8-acec-70e583fc2449 nodeName:}" failed. No retries permitted until 2025-10-19 13:17:41.212626015 +0000 UTC m=+12.388168355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pjzp6" (UniqueName: "kubernetes.io/projected/86d791c0-5ed1-48b8-acec-70e583fc2449-kube-api-access-pjzp6") pod "kubernetes-dashboard-855c9754f9-m9x8r" (UID: "86d791c0-5ed1-48b8-acec-70e583fc2449") : failed to sync configmap cache: timed out waiting for the condition
	Oct 19 13:17:41 embed-certs-834340 kubelet[779]: W1019 13:17:41.393310     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/crio-d29b3670f80aea16a1b3d73ca6c8ad2026e43fa3e3c7ef5d758afe709c42ef5c WatchSource:0}: Error finding container d29b3670f80aea16a1b3d73ca6c8ad2026e43fa3e3c7ef5d758afe709c42ef5c: Status 404 returned error can't find the container with id d29b3670f80aea16a1b3d73ca6c8ad2026e43fa3e3c7ef5d758afe709c42ef5c
	Oct 19 13:17:41 embed-certs-834340 kubelet[779]: W1019 13:17:41.402453     779 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9a5cfef083e8849f0ec7d66f7dc1499fe9a0cc436a31cc955bbf0d5c60f11e59/crio-56400a7f42af70fef47bfdecec34fa2d48b5ae7e0a39ba11b4050522612868c8 WatchSource:0}: Error finding container 56400a7f42af70fef47bfdecec34fa2d48b5ae7e0a39ba11b4050522612868c8: Status 404 returned error can't find the container with id 56400a7f42af70fef47bfdecec34fa2d48b5ae7e0a39ba11b4050522612868c8
	Oct 19 13:17:44 embed-certs-834340 kubelet[779]: I1019 13:17:44.223853     779 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 13:17:47 embed-certs-834340 kubelet[779]: I1019 13:17:47.223529     779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m9x8r" podStartSLOduration=3.419142011 podStartE2EDuration="8.223514s" podCreationTimestamp="2025-10-19 13:17:39 +0000 UTC" firstStartedPulling="2025-10-19 13:17:41.396044933 +0000 UTC m=+12.571587232" lastFinishedPulling="2025-10-19 13:17:46.200416914 +0000 UTC m=+17.375959221" observedRunningTime="2025-10-19 13:17:47.223165384 +0000 UTC m=+18.398707691" watchObservedRunningTime="2025-10-19 13:17:47.223514 +0000 UTC m=+18.399056298"
	Oct 19 13:17:51 embed-certs-834340 kubelet[779]: I1019 13:17:51.217130     779 scope.go:117] "RemoveContainer" containerID="6359157b01599f4cee7dfe5237f34018dc4918b0418707f6357f49ca811eefcc"
	Oct 19 13:17:52 embed-certs-834340 kubelet[779]: I1019 13:17:52.221663     779 scope.go:117] "RemoveContainer" containerID="6359157b01599f4cee7dfe5237f34018dc4918b0418707f6357f49ca811eefcc"
	Oct 19 13:17:52 embed-certs-834340 kubelet[779]: I1019 13:17:52.222002     779 scope.go:117] "RemoveContainer" containerID="3cf4ba1462823f3fbca92b6d5cc30f2d515b2efbad2de067c41050e451024638"
	Oct 19 13:17:52 embed-certs-834340 kubelet[779]: E1019 13:17:52.222157     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jkcd_kubernetes-dashboard(15b5ab5d-1dc4-4250-afe7-f70dda24b9a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd" podUID="15b5ab5d-1dc4-4250-afe7-f70dda24b9a6"
	Oct 19 13:17:53 embed-certs-834340 kubelet[779]: I1019 13:17:53.224893     779 scope.go:117] "RemoveContainer" containerID="3cf4ba1462823f3fbca92b6d5cc30f2d515b2efbad2de067c41050e451024638"
	Oct 19 13:17:53 embed-certs-834340 kubelet[779]: E1019 13:17:53.225045     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jkcd_kubernetes-dashboard(15b5ab5d-1dc4-4250-afe7-f70dda24b9a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd" podUID="15b5ab5d-1dc4-4250-afe7-f70dda24b9a6"
	Oct 19 13:18:01 embed-certs-834340 kubelet[779]: I1019 13:18:01.359580     779 scope.go:117] "RemoveContainer" containerID="3cf4ba1462823f3fbca92b6d5cc30f2d515b2efbad2de067c41050e451024638"
	Oct 19 13:18:02 embed-certs-834340 kubelet[779]: I1019 13:18:02.259603     779 scope.go:117] "RemoveContainer" containerID="3cf4ba1462823f3fbca92b6d5cc30f2d515b2efbad2de067c41050e451024638"
	Oct 19 13:18:02 embed-certs-834340 kubelet[779]: I1019 13:18:02.259894     779 scope.go:117] "RemoveContainer" containerID="4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e"
	Oct 19 13:18:02 embed-certs-834340 kubelet[779]: E1019 13:18:02.260243     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jkcd_kubernetes-dashboard(15b5ab5d-1dc4-4250-afe7-f70dda24b9a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd" podUID="15b5ab5d-1dc4-4250-afe7-f70dda24b9a6"
	Oct 19 13:18:07 embed-certs-834340 kubelet[779]: I1019 13:18:07.276404     779 scope.go:117] "RemoveContainer" containerID="31231e1c742bdbc0a3dba61c64b968fd68a7bb9fa8d9ab32f58da69d755f6dcc"
	Oct 19 13:18:11 embed-certs-834340 kubelet[779]: I1019 13:18:11.359155     779 scope.go:117] "RemoveContainer" containerID="4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e"
	Oct 19 13:18:11 embed-certs-834340 kubelet[779]: E1019 13:18:11.359790     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jkcd_kubernetes-dashboard(15b5ab5d-1dc4-4250-afe7-f70dda24b9a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd" podUID="15b5ab5d-1dc4-4250-afe7-f70dda24b9a6"
	Oct 19 13:18:27 embed-certs-834340 kubelet[779]: I1019 13:18:27.048422     779 scope.go:117] "RemoveContainer" containerID="4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e"
	Oct 19 13:18:27 embed-certs-834340 kubelet[779]: I1019 13:18:27.327458     779 scope.go:117] "RemoveContainer" containerID="4c74d8cafc8ba306ec13a47045375ad7bda67a0b6040b076c92502e90a2fb40e"
	Oct 19 13:18:27 embed-certs-834340 kubelet[779]: I1019 13:18:27.327828     779 scope.go:117] "RemoveContainer" containerID="f2b22a7c199217cfb9c5c6c994f073ef81dd212c2d9eb9450de07cf8ab355502"
	Oct 19 13:18:27 embed-certs-834340 kubelet[779]: E1019 13:18:27.330943     779 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jkcd_kubernetes-dashboard(15b5ab5d-1dc4-4250-afe7-f70dda24b9a6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jkcd" podUID="15b5ab5d-1dc4-4250-afe7-f70dda24b9a6"
	Oct 19 13:18:28 embed-certs-834340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 13:18:28 embed-certs-834340 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 13:18:28 embed-certs-834340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1c4acb28dc65c2cf95e3cd764af3122d1e0110b3c5d4eed9941f8e009ca9688f] <==
	2025/10/19 13:17:46 Using namespace: kubernetes-dashboard
	2025/10/19 13:17:46 Using in-cluster config to connect to apiserver
	2025/10/19 13:17:46 Using secret token for csrf signing
	2025/10/19 13:17:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 13:17:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 13:17:46 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 13:17:46 Generating JWE encryption key
	2025/10/19 13:17:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 13:17:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 13:17:46 Initializing JWE encryption key from synchronized object
	2025/10/19 13:17:46 Creating in-cluster Sidecar client
	2025/10/19 13:17:46 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:17:46 Serving insecurely on HTTP port: 9090
	2025/10/19 13:18:16 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:17:46 Starting overwatch
	
	
	==> storage-provisioner [001f191f75075fcbbccb52988562ecce0820f9a6c12edc5db65687f5b91128b8] <==
	I1019 13:18:07.355056       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 13:18:07.355102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 13:18:07.357743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:10.813327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:15.073577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:18.672441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:21.726944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:24.749055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:24.756460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:18:24.756678       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 13:18:24.759336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-834340_3c67bc1e-c87b-473a-a5b8-8dfee52678ff!
	I1019 13:18:24.757090       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1d54810-d394-48c2-ac3f-ee098575b9a6", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-834340_3c67bc1e-c87b-473a-a5b8-8dfee52678ff became leader
	W1019 13:18:24.761651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:24.771011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:18:24.859755       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-834340_3c67bc1e-c87b-473a-a5b8-8dfee52678ff!
	W1019 13:18:26.775649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:26.785003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:28.789654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:28.795023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:30.813900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:30.822030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:32.827273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:32.850335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:34.853956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:18:34.865465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [31231e1c742bdbc0a3dba61c64b968fd68a7bb9fa8d9ab32f58da69d755f6dcc] <==
	I1019 13:17:36.676820       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 13:18:06.679397       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-834340 -n embed-certs-834340
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-834340 -n embed-certs-834340: exit status 2 (573.843553ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-834340 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.98s)
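The fatal line in the storage-provisioner log above ("dial tcp 10.96.0.1:443: i/o timeout") means the provisioner could not reach the in-cluster apiserver Service IP after the restart, which is consistent with the pause failure that follows. A minimal way to probe this by hand is sketched below; the profile name is taken from the logs, while the presence of curl inside the kicbase node is an assumption, and these commands are not part of the test suite.

	# Hedged diagnostic sketch (assumes curl is available in the node image).
	# Confirm the Service IP the provisioner dials (normally 10.96.0.1):
	kubectl --context embed-certs-834340 get svc kubernetes -n default
	# Probe that IP from inside the node; a timeout here reproduces the error:
	minikube -p embed-certs-834340 ssh -- curl -k --max-time 5 https://10.96.0.1:443/version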

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-895642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-895642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (268.023503ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:19:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-895642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
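The pause-related failures in this group share one root error: minikube's paused-state check shells out to "sudo runc list -f json" inside the node, and runc exits 1 because its default state directory /run/runc does not exist. A hedged way to confirm that by hand is sketched below; reading /run/runc as the directory the check consults is inferred from the error text above, and crictl availability in the node is an assumption.

	# Hedged diagnostic sketch. runc's global --root flag selects its state
	# directory; /run/runc is the default, matching the failing check:
	minikube -p newest-cni-895642 ssh -- sudo runc --root /run/runc list -f json
	# crictl queries cri-o directly and should still list containers even
	# when the runc state directory is absent:
	minikube -p newest-cni-895642 ssh -- sudo crictl ps -a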
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-895642
helpers_test.go:243: (dbg) docker inspect newest-cni-895642:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91",
	        "Created": "2025-10-19T13:18:47.102094751Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500234,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:18:47.214878542Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/hostname",
	        "HostsPath": "/var/lib/docker/containers/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/hosts",
	        "LogPath": "/var/lib/docker/containers/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91-json.log",
	        "Name": "/newest-cni-895642",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-895642:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-895642",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91",
	                "LowerDir": "/var/lib/docker/overlay2/78a263d1d7086b8fb12930f09e9fe63d30f6fc9948d021e88738800232e60a99-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/78a263d1d7086b8fb12930f09e9fe63d30f6fc9948d021e88738800232e60a99/merged",
	                "UpperDir": "/var/lib/docker/overlay2/78a263d1d7086b8fb12930f09e9fe63d30f6fc9948d021e88738800232e60a99/diff",
	                "WorkDir": "/var/lib/docker/overlay2/78a263d1d7086b8fb12930f09e9fe63d30f6fc9948d021e88738800232e60a99/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-895642",
	                "Source": "/var/lib/docker/volumes/newest-cni-895642/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-895642",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-895642",
	                "name.minikube.sigs.k8s.io": "newest-cni-895642",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab64a323a59f4d83f1714809f05b88d782e7b54927545598da4204b5c96c13ae",
	            "SandboxKey": "/var/run/docker/netns/ab64a323a59f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-895642": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:2b:b4:39:be:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "584dee223ade6b07d2b96f7183f8063e011ff006f776b87c19f6da2971cc4a7f",
	                    "EndpointID": "5a52c20d99f0ae1ea56ecc9e3f104ac00fbc5484ecfc1cd7245aa41541817b9b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-895642",
	                        "caf0cfe00265"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895642 -n newest-cni-895642
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-895642 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-895642 logs -n 25: (1.240546219s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-842494 image list --format=json                                                                                                                                                                                               │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ pause   │ -p old-k8s-version-842494 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │                     │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                                                                                     │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ delete  │ -p old-k8s-version-842494                                                                                                                                                                                                                     │ old-k8s-version-842494       │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:15 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:15 UTC │ 19 Oct 25 13:16 UTC │
	│ image   │ no-preload-108149 image list --format=json                                                                                                                                                                                                    │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ pause   │ -p no-preload-108149 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │                     │
	│ delete  │ -p no-preload-108149                                                                                                                                                                                                                          │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p no-preload-108149                                                                                                                                                                                                                          │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p disable-driver-mounts-418719                                                                                                                                                                                                               │ disable-driver-mounts-418719 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │                     │
	│ stop    │ -p embed-certs-834340 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-834340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-455348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-455348 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-455348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ image   │ embed-certs-834340 image list --format=json                                                                                                                                                                                                   │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ pause   │ -p embed-certs-834340 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ delete  │ -p embed-certs-834340                                                                                                                                                                                                                         │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ delete  │ -p embed-certs-834340                                                                                                                                                                                                                         │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ start   │ -p newest-cni-895642 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-895642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:18:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:18:40.515262  499672 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:18:40.515490  499672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:18:40.515518  499672 out.go:374] Setting ErrFile to fd 2...
	I1019 13:18:40.515537  499672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:18:40.515858  499672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:18:40.516324  499672 out.go:368] Setting JSON to false
	I1019 13:18:40.517371  499672 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10871,"bootTime":1760869050,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:18:40.517461  499672 start.go:141] virtualization:  
	I1019 13:18:40.521059  499672 out.go:179] * [newest-cni-895642] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:18:40.526827  499672 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:18:40.526906  499672 notify.go:220] Checking for updates...
	I1019 13:18:40.532800  499672 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:18:40.536289  499672 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:18:40.539328  499672 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:18:40.542350  499672 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:18:40.545378  499672 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:18:40.548955  499672 config.go:182] Loaded profile config "default-k8s-diff-port-455348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:18:40.549133  499672 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:18:40.592822  499672 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:18:40.592926  499672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:18:40.664446  499672 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 13:18:40.655565298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:18:40.664551  499672 docker.go:318] overlay module found
	I1019 13:18:40.667787  499672 out.go:179] * Using the docker driver based on user configuration
	I1019 13:18:40.670725  499672 start.go:305] selected driver: docker
	I1019 13:18:40.670748  499672 start.go:925] validating driver "docker" against <nil>
	I1019 13:18:40.670763  499672 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:18:40.671564  499672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:18:40.725555  499672 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-19 13:18:40.716844164 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:18:40.725740  499672 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1019 13:18:40.725766  499672 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1019 13:18:40.726009  499672 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 13:18:40.728956  499672 out.go:179] * Using Docker driver with root privileges
	I1019 13:18:40.731774  499672 cni.go:84] Creating CNI manager for ""
	I1019 13:18:40.731843  499672 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:18:40.731863  499672 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 13:18:40.731941  499672 start.go:349] cluster config:
	{Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:18:40.736214  499672 out.go:179] * Starting "newest-cni-895642" primary control-plane node in "newest-cni-895642" cluster
	I1019 13:18:40.739107  499672 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:18:40.742041  499672 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:18:40.744792  499672 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:18:40.744848  499672 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 13:18:40.744860  499672 cache.go:58] Caching tarball of preloaded images
	I1019 13:18:40.744879  499672 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:18:40.744952  499672 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:18:40.744963  499672 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 13:18:40.745074  499672 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/config.json ...
	I1019 13:18:40.745090  499672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/config.json: {Name:mkecb59f925581f502edaf64eb82dd60b0f87121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:40.764231  499672 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:18:40.764253  499672 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:18:40.764272  499672 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:18:40.764299  499672 start.go:360] acquireMachinesLock for newest-cni-895642: {Name:mke5c4230882c7c86983f0da461147450e8e886d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:18:40.764439  499672 start.go:364] duration metric: took 118.615µs to acquireMachinesLock for "newest-cni-895642"
	I1019 13:18:40.764469  499672 start.go:93] Provisioning new machine with config: &{Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:18:40.764542  499672 start.go:125] createHost starting for "" (driver="docker")
	I1019 13:18:38.557768  496573 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1019 13:18:38.558015  496573 addons.go:514] duration metric: took 8.5918031s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1019 13:18:38.558867  496573 api_server.go:141] control plane version: v1.34.1
	I1019 13:18:38.558885  496573 api_server.go:131] duration metric: took 11.277413ms to wait for apiserver health ...
	I1019 13:18:38.558893  496573 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 13:18:38.571288  496573 system_pods.go:59] 8 kube-system pods found
	I1019 13:18:38.571329  496573 system_pods.go:61] "coredns-66bc5c9577-qn68x" [ec110a63-3a4a-4459-b52f-91f5bbc3040c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:18:38.571339  496573 system_pods.go:61] "etcd-default-k8s-diff-port-455348" [fbed1466-c3ec-408e-a585-1161333eb770] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:18:38.571347  496573 system_pods.go:61] "kindnet-m2tx2" [a29cf050-9838-4f87-b000-1bc588bc226e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 13:18:38.571353  496573 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-455348" [fbca8027-d2e0-47b0-9ec6-d34db77afb1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:18:38.571362  496573 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-455348" [72ef8b73-4a4e-471d-9f80-8b8c56b15148] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:18:38.571368  496573 system_pods.go:61] "kube-proxy-vbd99" [856b676a-25aa-48b5-ad14-043c61758179] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 13:18:38.571374  496573 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-455348" [cb0881af-73f5-43fe-a786-efb577036c6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:18:38.571380  496573 system_pods.go:61] "storage-provisioner" [dadf6eac-8768-45de-aea6-a3ca3f518c9d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:18:38.571387  496573 system_pods.go:74] duration metric: took 12.48799ms to wait for pod list to return data ...
	I1019 13:18:38.571395  496573 default_sa.go:34] waiting for default service account to be created ...
	I1019 13:18:38.579200  496573 default_sa.go:45] found service account: "default"
	I1019 13:18:38.579256  496573 default_sa.go:55] duration metric: took 7.854195ms for default service account to be created ...
	I1019 13:18:38.579281  496573 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 13:18:38.665903  496573 system_pods.go:86] 8 kube-system pods found
	I1019 13:18:38.665935  496573 system_pods.go:89] "coredns-66bc5c9577-qn68x" [ec110a63-3a4a-4459-b52f-91f5bbc3040c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 13:18:38.665945  496573 system_pods.go:89] "etcd-default-k8s-diff-port-455348" [fbed1466-c3ec-408e-a585-1161333eb770] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:18:38.665960  496573 system_pods.go:89] "kindnet-m2tx2" [a29cf050-9838-4f87-b000-1bc588bc226e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 13:18:38.665970  496573 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-455348" [fbca8027-d2e0-47b0-9ec6-d34db77afb1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:18:38.665977  496573 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-455348" [72ef8b73-4a4e-471d-9f80-8b8c56b15148] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:18:38.665984  496573 system_pods.go:89] "kube-proxy-vbd99" [856b676a-25aa-48b5-ad14-043c61758179] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 13:18:38.665990  496573 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-455348" [cb0881af-73f5-43fe-a786-efb577036c6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:18:38.666000  496573 system_pods.go:89] "storage-provisioner" [dadf6eac-8768-45de-aea6-a3ca3f518c9d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 13:18:38.666007  496573 system_pods.go:126] duration metric: took 86.707856ms to wait for k8s-apps to be running ...
	I1019 13:18:38.666016  496573 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 13:18:38.666073  496573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:18:38.701939  496573 system_svc.go:56] duration metric: took 35.907648ms WaitForService to wait for kubelet
	I1019 13:18:38.701963  496573 kubeadm.go:586] duration metric: took 8.736088742s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:18:38.701983  496573 node_conditions.go:102] verifying NodePressure condition ...
	I1019 13:18:38.712876  496573 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 13:18:38.712911  496573 node_conditions.go:123] node cpu capacity is 2
	I1019 13:18:38.712923  496573 node_conditions.go:105] duration metric: took 10.935296ms to run NodePressure ...
	I1019 13:18:38.712936  496573 start.go:241] waiting for startup goroutines ...
	I1019 13:18:38.712944  496573 start.go:246] waiting for cluster config update ...
	I1019 13:18:38.712955  496573 start.go:255] writing updated cluster config ...
	I1019 13:18:38.713338  496573 ssh_runner.go:195] Run: rm -f paused
	I1019 13:18:38.721311  496573 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 13:18:38.732569  496573 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qn68x" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 13:18:40.738711  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	I1019 13:18:40.767874  499672 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 13:18:40.768203  499672 start.go:159] libmachine.API.Create for "newest-cni-895642" (driver="docker")
	I1019 13:18:40.768263  499672 client.go:168] LocalClient.Create starting
	I1019 13:18:40.768358  499672 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem
	I1019 13:18:40.768395  499672 main.go:141] libmachine: Decoding PEM data...
	I1019 13:18:40.768426  499672 main.go:141] libmachine: Parsing certificate...
	I1019 13:18:40.768538  499672 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem
	I1019 13:18:40.768783  499672 main.go:141] libmachine: Decoding PEM data...
	I1019 13:18:40.768805  499672 main.go:141] libmachine: Parsing certificate...
	I1019 13:18:40.770355  499672 cli_runner.go:164] Run: docker network inspect newest-cni-895642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 13:18:40.786453  499672 cli_runner.go:211] docker network inspect newest-cni-895642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 13:18:40.786527  499672 network_create.go:284] running [docker network inspect newest-cni-895642] to gather additional debugging logs...
	I1019 13:18:40.786553  499672 cli_runner.go:164] Run: docker network inspect newest-cni-895642
	W1019 13:18:40.802181  499672 cli_runner.go:211] docker network inspect newest-cni-895642 returned with exit code 1
	I1019 13:18:40.802215  499672 network_create.go:287] error running [docker network inspect newest-cni-895642]: docker network inspect newest-cni-895642: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-895642 not found
	I1019 13:18:40.802230  499672 network_create.go:289] output of [docker network inspect newest-cni-895642]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-895642 not found
	
	** /stderr **
	I1019 13:18:40.802322  499672 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:18:40.819395  499672 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-319c97358c5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2a:99:c3:44:12:51} reservation:<nil>}
	I1019 13:18:40.822063  499672 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5c09b33e0936 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fe:93:4b:f6:fd:1c} reservation:<nil>}
	I1019 13:18:40.822593  499672 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2c2bbaadd4a8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:7a:8f:96:27:48:2c} reservation:<nil>}
	I1019 13:18:40.822952  499672 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-feb5b6cb71ad IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:85:58:a8:0f:9a} reservation:<nil>}
	I1019 13:18:40.823397  499672 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2ac70}
	I1019 13:18:40.823442  499672 network_create.go:124] attempt to create docker network newest-cni-895642 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 13:18:40.823506  499672 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-895642 newest-cni-895642
	I1019 13:18:40.885050  499672 network_create.go:108] docker network newest-cni-895642 192.168.85.0/24 created
	I1019 13:18:40.885080  499672 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-895642" container
	I1019 13:18:40.885165  499672 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 13:18:40.901756  499672 cli_runner.go:164] Run: docker volume create newest-cni-895642 --label name.minikube.sigs.k8s.io=newest-cni-895642 --label created_by.minikube.sigs.k8s.io=true
	I1019 13:18:40.920702  499672 oci.go:103] Successfully created a docker volume newest-cni-895642
	I1019 13:18:40.920785  499672 cli_runner.go:164] Run: docker run --rm --name newest-cni-895642-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895642 --entrypoint /usr/bin/test -v newest-cni-895642:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 13:18:41.461268  499672 oci.go:107] Successfully prepared a docker volume newest-cni-895642
	I1019 13:18:41.461322  499672 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:18:41.461341  499672 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 13:18:41.461414  499672 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-895642:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 13:18:42.746336  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	W1019 13:18:45.246255  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	I1019 13:18:46.972380  499672 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-895642:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.510927857s)
	I1019 13:18:46.972416  499672 kic.go:203] duration metric: took 5.511067962s to extract preloaded images to volume ...
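The preload run above untars the cri-o image bundle directly into the named volume, so the node container boots with its images already in place. A quick spot-check of what landed there, assuming any small utility image is available locally (alpine here, purely for illustration):

    # List the volume contents at an arbitrary mount point.
    docker run --rm -v newest-cni-895642:/extractDir alpine ls /extractDir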
	W1019 13:18:46.972545  499672 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1019 13:18:46.972657  499672 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 13:18:47.075696  499672 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-895642 --name newest-cni-895642 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895642 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-895642 --network newest-cni-895642 --ip 192.168.85.2 --volume newest-cni-895642:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
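The long docker run above publishes the node's sshd, API server, and registry ports on ephemeral loopback ports. A sketch for recovering the SSH mapping the provisioner dials below, assuming the container name from this log:

    # Show which 127.0.0.1 port maps to container port 22 (33458 in this run).
    docker port newest-cni-895642 22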
	I1019 13:18:47.548434  499672 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Running}}
	I1019 13:18:47.573434  499672 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:18:47.601550  499672 cli_runner.go:164] Run: docker exec newest-cni-895642 stat /var/lib/dpkg/alternatives/iptables
	I1019 13:18:47.670449  499672 oci.go:144] the created container "newest-cni-895642" has a running status.
	I1019 13:18:47.670482  499672 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa...
	I1019 13:18:48.257647  499672 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 13:18:48.278335  499672 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:18:48.301238  499672 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 13:18:48.301257  499672 kic_runner.go:114] Args: [docker exec --privileged newest-cni-895642 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 13:18:48.356466  499672 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:18:48.377311  499672 machine.go:93] provisionDockerMachine start ...
	I1019 13:18:48.377418  499672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:18:48.401837  499672 main.go:141] libmachine: Using SSH client type: native
	I1019 13:18:48.402220  499672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1019 13:18:48.402244  499672 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:18:48.402932  499672 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34242->127.0.0.1:33458: read: connection reset by peer
	W1019 13:18:47.750448  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	W1019 13:18:50.251388  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	I1019 13:18:51.574097  499672 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-895642
	
	I1019 13:18:51.574118  499672 ubuntu.go:182] provisioning hostname "newest-cni-895642"
	I1019 13:18:51.574199  499672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:18:51.604774  499672 main.go:141] libmachine: Using SSH client type: native
	I1019 13:18:51.605494  499672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1019 13:18:51.605537  499672 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-895642 && echo "newest-cni-895642" | sudo tee /etc/hostname
	I1019 13:18:51.788415  499672 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-895642
	
	I1019 13:18:51.788569  499672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:18:51.824451  499672 main.go:141] libmachine: Using SSH client type: native
	I1019 13:18:51.824753  499672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1019 13:18:51.824769  499672 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895642' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895642/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895642' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:18:51.986269  499672 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 13:18:51.986298  499672 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:18:51.986317  499672 ubuntu.go:190] setting up certificates
	I1019 13:18:51.986359  499672 provision.go:84] configureAuth start
	I1019 13:18:51.986448  499672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895642
	I1019 13:18:52.011236  499672 provision.go:143] copyHostCerts
	I1019 13:18:52.011326  499672 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:18:52.011340  499672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:18:52.011419  499672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:18:52.011524  499672 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:18:52.011535  499672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:18:52.011563  499672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:18:52.011653  499672 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:18:52.011663  499672 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:18:52.011691  499672 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:18:52.011753  499672 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.newest-cni-895642 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-895642]
	I1019 13:18:52.138879  499672 provision.go:177] copyRemoteCerts
	I1019 13:18:52.139010  499672 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:18:52.139086  499672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:18:52.166098  499672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:18:52.275025  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 13:18:52.306049  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:18:52.325883  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 13:18:52.346071  499672 provision.go:87] duration metric: took 359.682445ms to configureAuth
	I1019 13:18:52.346100  499672 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:18:52.346330  499672 config.go:182] Loaded profile config "newest-cni-895642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:18:52.346484  499672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:18:52.369908  499672 main.go:141] libmachine: Using SSH client type: native
	I1019 13:18:52.370248  499672 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1019 13:18:52.370270  499672 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:18:52.709429  499672 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:18:52.709456  499672 machine.go:96] duration metric: took 4.332119054s to provisionDockerMachine
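Provisioning ends by writing the CRI-O environment drop-in shown above and restarting crio. Reading it back, assuming the container name from this log:

    # The file the tee command above created inside the node.
    docker exec newest-cni-895642 cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '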
	I1019 13:18:52.709474  499672 client.go:171] duration metric: took 11.941202725s to LocalClient.Create
	I1019 13:18:52.709513  499672 start.go:167] duration metric: took 11.941311223s to libmachine.API.Create "newest-cni-895642"
	I1019 13:18:52.709527  499672 start.go:293] postStartSetup for "newest-cni-895642" (driver="docker")
	I1019 13:18:52.709554  499672 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:18:52.709645  499672 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:18:52.709726  499672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:18:52.728380  499672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:18:52.846916  499672 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:18:52.850726  499672 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:18:52.850759  499672 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:18:52.850770  499672 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:18:52.850819  499672 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:18:52.850898  499672 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:18:52.851022  499672 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:18:52.859020  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:18:52.880013  499672 start.go:296] duration metric: took 170.469ms for postStartSetup
	I1019 13:18:52.880428  499672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895642
	I1019 13:18:52.903547  499672 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/config.json ...
	I1019 13:18:52.903849  499672 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:18:52.903922  499672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:18:52.927482  499672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:18:53.031120  499672 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:18:53.036983  499672 start.go:128] duration metric: took 12.272426108s to createHost
	I1019 13:18:53.037009  499672 start.go:83] releasing machines lock for "newest-cni-895642", held for 12.2725579s
	I1019 13:18:53.037080  499672 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895642
	I1019 13:18:53.055431  499672 ssh_runner.go:195] Run: cat /version.json
	I1019 13:18:53.055526  499672 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:18:53.055574  499672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:18:53.055810  499672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:18:53.091300  499672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:18:53.098214  499672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:18:53.306331  499672 ssh_runner.go:195] Run: systemctl --version
	I1019 13:18:53.313052  499672 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:18:53.377586  499672 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:18:53.387367  499672 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:18:53.387478  499672 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:18:53.428841  499672 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1019 13:18:53.428894  499672 start.go:495] detecting cgroup driver to use...
	I1019 13:18:53.428928  499672 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:18:53.428992  499672 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:18:53.453113  499672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:18:53.470946  499672 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:18:53.471029  499672 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:18:53.493800  499672 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:18:53.521093  499672 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:18:53.689345  499672 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:18:53.891996  499672 docker.go:234] disabling docker service ...
	I1019 13:18:53.892082  499672 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:18:53.930659  499672 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:18:53.948130  499672 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:18:54.112201  499672 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:18:54.275962  499672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:18:54.295616  499672 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:18:54.311523  499672 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 13:18:54.311665  499672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:54.323504  499672 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:18:54.323623  499672 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:54.332761  499672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:54.342305  499672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:54.357398  499672 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:18:54.367134  499672 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:54.385780  499672 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:54.416585  499672 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:18:54.440592  499672 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:18:54.452601  499672 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:18:54.465813  499672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:18:54.624313  499672 ssh_runner.go:195] Run: sudo systemctl restart crio
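The sed pipeline above rewrites four settings in the CRI-O drop-in before the restart. Reconstructed from those commands (not captured from the host), the edited keys in /etc/crio/crio.conf.d/02-crio.conf can be checked with:

    # Grep exactly the keys the sed commands edited.
    docker exec newest-cni-895642 grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",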
	I1019 13:18:55.158930  499672 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:18:55.159074  499672 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:18:55.164284  499672 start.go:563] Will wait 60s for crictl version
	I1019 13:18:55.164439  499672 ssh_runner.go:195] Run: which crictl
	I1019 13:18:55.169589  499672 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:18:55.211373  499672 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 13:18:55.211526  499672 ssh_runner.go:195] Run: crio --version
	I1019 13:18:55.252127  499672 ssh_runner.go:195] Run: crio --version
	I1019 13:18:55.295385  499672 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 13:18:55.298500  499672 cli_runner.go:164] Run: docker network inspect newest-cni-895642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:18:55.314727  499672 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 13:18:55.319701  499672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:18:55.333591  499672 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 13:18:55.336744  499672 kubeadm.go:883] updating cluster {Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 13:18:55.336881  499672 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:18:55.336991  499672 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:18:55.384142  499672 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:18:55.384168  499672 crio.go:433] Images already preloaded, skipping extraction
	I1019 13:18:55.384220  499672 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:18:55.414112  499672 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:18:55.414139  499672 cache_images.go:85] Images are preloaded, skipping loading
	I1019 13:18:55.414148  499672 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 13:18:55.414241  499672 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-895642 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 13:18:55.414331  499672 ssh_runner.go:195] Run: crio config
	I1019 13:18:55.503923  499672 cni.go:84] Creating CNI manager for ""
	I1019 13:18:55.503954  499672 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:18:55.503974  499672 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 13:18:55.504012  499672 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895642 NodeName:newest-cni-895642 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:18:55.504173  499672 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-895642"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 13:18:55.504255  499672 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 13:18:55.512852  499672 binaries.go:44] Found k8s binaries, skipping transfer
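The kubeadm config printed above is staged as /var/tmp/minikube/kubeadm.yaml.new (scp'd just below) and consumed by kubeadm init later in this log. A sketch for checking it by hand once it is staged, assuming `kubeadm config validate` is available in the staged binary (the subcommand has shipped since v1.26):

    # Schema-check the generated manifest inside the node.
    docker exec newest-cni-895642 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm \
      config validate --config /var/tmp/minikube/kubeadm.yaml.new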
	I1019 13:18:55.512934  499672 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	W1019 13:18:52.744835  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	W1019 13:18:55.240757  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	I1019 13:18:55.522883  499672 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 13:18:55.535909  499672 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:18:55.549082  499672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1019 13:18:55.562894  499672 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 13:18:55.567354  499672 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
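The one-liner above is the provisioner's idempotent /etc/hosts update: grep -v strips any stale control-plane.minikube.internal entry, the fresh line is appended, and sudo cp moves the temp file into place (a plain shell redirect would not survive sudo). Verifying from the host, container name from this log:

    docker exec newest-cni-895642 grep control-plane.minikube.internal /etc/hosts
    # 192.168.85.2	control-plane.minikube.internal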
	I1019 13:18:55.579137  499672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:18:55.734342  499672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:18:55.761823  499672 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642 for IP: 192.168.85.2
	I1019 13:18:55.761846  499672 certs.go:195] generating shared ca certs ...
	I1019 13:18:55.761862  499672 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:55.762055  499672 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 13:18:55.762461  499672 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 13:18:55.768855  499672 certs.go:257] generating profile certs ...
	I1019 13:18:55.768980  499672 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/client.key
	I1019 13:18:55.769002  499672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/client.crt with IP's: []
	I1019 13:18:56.394305  499672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/client.crt ...
	I1019 13:18:56.394340  499672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/client.crt: {Name:mkf61648e1707cc6a3fe933c02988ef4b5160df5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:56.394543  499672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/client.key ...
	I1019 13:18:56.394557  499672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/client.key: {Name:mk964cc7cf697acd9b71a461e2302722d72b9831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:56.394653  499672 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.key.d4125fb8
	I1019 13:18:56.394670  499672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.crt.d4125fb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1019 13:18:56.795028  499672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.crt.d4125fb8 ...
	I1019 13:18:56.795063  499672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.crt.d4125fb8: {Name:mk8add13c0b5184982417421cd67063b2b96f3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:56.795254  499672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.key.d4125fb8 ...
	I1019 13:18:56.795270  499672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.key.d4125fb8: {Name:mkcdb1f8beed089fe399bcda873998396291a603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:56.795367  499672 certs.go:382] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.crt.d4125fb8 -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.crt
	I1019 13:18:56.795453  499672 certs.go:386] copying /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.key.d4125fb8 -> /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.key
	I1019 13:18:56.795515  499672 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.key
	I1019 13:18:56.795536  499672 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.crt with IP's: []
	I1019 13:18:57.683544  499672 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.crt ...
	I1019 13:18:57.683581  499672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.crt: {Name:mk421223bc236e92658c4f4fd09c007dcd495f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:57.683775  499672 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.key ...
	I1019 13:18:57.683791  499672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.key: {Name:mka364af14afdfbd3371d0dabdc6a85804e92be1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:18:57.683989  499672 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem (1338 bytes)
	W1019 13:18:57.684037  499672 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518_empty.pem, impossibly tiny 0 bytes
	I1019 13:18:57.684046  499672 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 13:18:57.684105  499672 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 13:18:57.684133  499672 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 13:18:57.684154  499672 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 13:18:57.684196  499672 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:18:57.684804  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 13:18:57.705580  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 13:18:57.726267  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 13:18:57.749856  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 13:18:57.767163  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 13:18:57.785166  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 13:18:57.803319  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 13:18:57.822036  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 13:18:57.848329  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /usr/share/ca-certificates/2945182.pem (1708 bytes)
	I1019 13:18:57.865877  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 13:18:57.884096  499672 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem --> /usr/share/ca-certificates/294518.pem (1338 bytes)
	I1019 13:18:57.902391  499672 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
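All profile material is now under /var/lib/minikube/certs on the node. The apiserver cert was generated above with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]; a sketch to confirm them (openssl is on the node, as the next command shows):

    # Print the SAN extension of the freshly copied apiserver cert.
    docker exec newest-cni-895642 openssl x509 -noout -text \
      -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'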
	I1019 13:18:57.915528  499672 ssh_runner.go:195] Run: openssl version
	I1019 13:18:57.922086  499672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294518.pem && ln -fs /usr/share/ca-certificates/294518.pem /etc/ssl/certs/294518.pem"
	I1019 13:18:57.930278  499672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294518.pem
	I1019 13:18:57.934014  499672 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:20 /usr/share/ca-certificates/294518.pem
	I1019 13:18:57.934098  499672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294518.pem
	I1019 13:18:57.975134  499672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294518.pem /etc/ssl/certs/51391683.0"
	I1019 13:18:57.983388  499672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2945182.pem && ln -fs /usr/share/ca-certificates/2945182.pem /etc/ssl/certs/2945182.pem"
	I1019 13:18:57.991325  499672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2945182.pem
	I1019 13:18:57.995184  499672 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:20 /usr/share/ca-certificates/2945182.pem
	I1019 13:18:57.995251  499672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2945182.pem
	I1019 13:18:58.037030  499672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2945182.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 13:18:58.045838  499672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 13:18:58.054152  499672 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:18:58.057975  499672 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:18:58.058043  499672 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:18:58.099127  499672 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
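The 8-hex-digit link names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes, the same scheme c_rehash uses so TLS libraries can locate a CA by hashed subject. Reproducing one on the node:

    # The hash must match the /etc/ssl/certs/<hash>.0 symlink created above.
    docker exec newest-cni-895642 openssl x509 -hash -noout \
      -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941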
	I1019 13:18:58.107378  499672 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 13:18:58.110749  499672 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 13:18:58.110801  499672 kubeadm.go:400] StartCluster: {Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:18:58.110888  499672 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 13:18:58.110945  499672 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 13:18:58.145042  499672 cri.go:89] found id: ""
	I1019 13:18:58.145125  499672 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 13:18:58.153138  499672 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 13:18:58.160635  499672 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 13:18:58.160706  499672 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 13:18:58.168280  499672 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 13:18:58.168300  499672 kubeadm.go:157] found existing configuration files:
	
	I1019 13:18:58.168358  499672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 13:18:58.175980  499672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 13:18:58.176105  499672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 13:18:58.183973  499672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 13:18:58.193337  499672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 13:18:58.193458  499672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 13:18:58.202758  499672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 13:18:58.210993  499672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 13:18:58.211103  499672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 13:18:58.220145  499672 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 13:18:58.229140  499672 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 13:18:58.229253  499672 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 13:18:58.249137  499672 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 13:18:58.294412  499672 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 13:18:58.294688  499672 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 13:18:58.320782  499672 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 13:18:58.320901  499672 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1019 13:18:58.320965  499672 kubeadm.go:318] OS: Linux
	I1019 13:18:58.321034  499672 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 13:18:58.321113  499672 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1019 13:18:58.321180  499672 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 13:18:58.321257  499672 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 13:18:58.321329  499672 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 13:18:58.321405  499672 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 13:18:58.321476  499672 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 13:18:58.321554  499672 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 13:18:58.321626  499672 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1019 13:18:58.388530  499672 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 13:18:58.388705  499672 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 13:18:58.388809  499672 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 13:18:58.396006  499672 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 13:18:58.401943  499672 out.go:252]   - Generating certificates and keys ...
	I1019 13:18:58.402105  499672 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 13:18:58.402208  499672 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1019 13:18:58.512576  499672 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 13:18:59.143486  499672 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 13:19:00.087636  499672 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 13:19:00.361017  499672 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	W1019 13:18:57.737491  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	W1019 13:18:59.742212  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	I1019 13:19:00.840486  499672 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 13:19:00.840810  499672 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-895642] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 13:19:01.015012  499672 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 13:19:01.015153  499672 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-895642] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1019 13:19:01.291296  499672 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 13:19:01.675874  499672 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 13:19:02.227762  499672 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 13:19:02.227922  499672 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 13:19:02.480621  499672 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 13:19:02.936764  499672 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 13:19:03.725576  499672 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 13:19:04.009977  499672 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 13:19:04.544135  499672 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 13:19:04.544909  499672 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 13:19:04.547661  499672 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 13:19:04.551230  499672 out.go:252]   - Booting up control plane ...
	I1019 13:19:04.551332  499672 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 13:19:04.551409  499672 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 13:19:04.551475  499672 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 13:19:04.568636  499672 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 13:19:04.568973  499672 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 13:19:04.577598  499672 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 13:19:04.577968  499672 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 13:19:04.578210  499672 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 13:19:04.701763  499672 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 13:19:04.701890  499672 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1019 13:19:02.238841  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	W1019 13:19:04.239236  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	W1019 13:19:06.247990  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	I1019 13:19:06.701044  499672 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.00094028s
	I1019 13:19:06.704622  499672 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 13:19:06.704718  499672 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1019 13:19:06.704815  499672 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 13:19:06.704922  499672 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
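kubeadm polls the three local health endpoints listed above until each turns healthy. The same probes can be run by hand from inside the node, assuming curl is present in the kicbase image:

    # The components serve self-signed TLS locally, hence -k.
    docker exec newest-cni-895642 curl -ks https://127.0.0.1:10257/healthz   # kube-controller-manager
    docker exec newest-cni-895642 curl -ks https://127.0.0.1:10259/livez     # kube-scheduler
    docker exec newest-cni-895642 curl -ks https://192.168.85.2:8443/livez   # kube-apiserver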
	I1019 13:19:09.718392  499672 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.012901393s
	W1019 13:19:08.738563  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	W1019 13:19:10.739025  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	I1019 13:19:12.185789  499672 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.481095373s
	I1019 13:19:13.206998  499672 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502254626s
	I1019 13:19:13.228984  499672 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 13:19:13.250237  499672 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 13:19:13.267184  499672 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 13:19:13.267388  499672 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-895642 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 13:19:13.291861  499672 kubeadm.go:318] [bootstrap-token] Using token: b2xr8l.8ga13xbahzek9u60
	I1019 13:19:13.294855  499672 out.go:252]   - Configuring RBAC rules ...
	I1019 13:19:13.294983  499672 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 13:19:13.303182  499672 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 13:19:13.311821  499672 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 13:19:13.318413  499672 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 13:19:13.322999  499672 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 13:19:13.328062  499672 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 13:19:13.613937  499672 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 13:19:14.054981  499672 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 13:19:14.614599  499672 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 13:19:14.615601  499672 kubeadm.go:318] 
	I1019 13:19:14.615690  499672 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 13:19:14.615698  499672 kubeadm.go:318] 
	I1019 13:19:14.615789  499672 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 13:19:14.615795  499672 kubeadm.go:318] 
	I1019 13:19:14.615825  499672 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 13:19:14.615906  499672 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 13:19:14.616002  499672 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 13:19:14.616008  499672 kubeadm.go:318] 
	I1019 13:19:14.616072  499672 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 13:19:14.616079  499672 kubeadm.go:318] 
	I1019 13:19:14.616129  499672 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 13:19:14.616134  499672 kubeadm.go:318] 
	I1019 13:19:14.616189  499672 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 13:19:14.616267  499672 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 13:19:14.616352  499672 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 13:19:14.616357  499672 kubeadm.go:318] 
	I1019 13:19:14.616462  499672 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 13:19:14.616548  499672 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 13:19:14.616558  499672 kubeadm.go:318] 
	I1019 13:19:14.616670  499672 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token b2xr8l.8ga13xbahzek9u60 \
	I1019 13:19:14.616792  499672 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0ee0bbb0fbfe8419c71683408bd38502dbf921f3cb179cb0365daeb92f967309 \
	I1019 13:19:14.616820  499672 kubeadm.go:318] 	--control-plane 
	I1019 13:19:14.616825  499672 kubeadm.go:318] 
	I1019 13:19:14.616914  499672 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 13:19:14.616919  499672 kubeadm.go:318] 
	I1019 13:19:14.617015  499672 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token b2xr8l.8ga13xbahzek9u60 \
	I1019 13:19:14.617121  499672 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0ee0bbb0fbfe8419c71683408bd38502dbf921f3cb179cb0365daeb92f967309 
	I1019 13:19:14.621813  499672 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1019 13:19:14.622065  499672 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1019 13:19:14.622183  499672 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
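	For reference, the --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA's public key. The Kubernetes documentation shows how to recompute it on the control-plane node; a minimal sketch, assuming the default CA location /etc/kubernetes/pki/ca.crt:
	
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	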
	I1019 13:19:14.622208  499672 cni.go:84] Creating CNI manager for ""
	I1019 13:19:14.622218  499672 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:19:14.627140  499672 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 13:19:14.630105  499672 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 13:19:14.634357  499672 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 13:19:14.634379  499672 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 13:19:14.647771  499672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 13:19:14.972051  499672 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 13:19:14.972201  499672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:19:14.972280  499672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-895642 minikube.k8s.io/updated_at=2025_10_19T13_19_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=newest-cni-895642 minikube.k8s.io/primary=true
	I1019 13:19:15.190013  499672 ops.go:34] apiserver oom_adj: -16
	I1019 13:19:15.190120  499672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1019 13:19:13.238983  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	W1019 13:19:15.739315  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	I1019 13:19:15.690586  499672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:19:16.190350  499672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:19:16.690381  499672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:19:17.190247  499672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:19:17.690637  499672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:19:18.190811  499672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:19:18.690407  499672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:19:19.190225  499672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 13:19:19.406373  499672 kubeadm.go:1113] duration metric: took 4.434218599s to wait for elevateKubeSystemPrivileges
	I1019 13:19:19.406406  499672 kubeadm.go:402] duration metric: took 21.295608382s to StartCluster
	I1019 13:19:19.406423  499672 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:19.406484  499672 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:19:19.407451  499672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:19.407681  499672 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:19:19.407825  499672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 13:19:19.408111  499672 config.go:182] Loaded profile config "newest-cni-895642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:19:19.408161  499672 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 13:19:19.408228  499672 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-895642"
	I1019 13:19:19.408248  499672 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-895642"
	I1019 13:19:19.408283  499672 host.go:66] Checking if "newest-cni-895642" exists ...
	I1019 13:19:19.408779  499672 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:19.409208  499672 addons.go:69] Setting default-storageclass=true in profile "newest-cni-895642"
	I1019 13:19:19.409233  499672 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-895642"
	I1019 13:19:19.409520  499672 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:19.416173  499672 out.go:179] * Verifying Kubernetes components...
	I1019 13:19:19.423464  499672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:19:19.444675  499672 addons.go:238] Setting addon default-storageclass=true in "newest-cni-895642"
	I1019 13:19:19.444719  499672 host.go:66] Checking if "newest-cni-895642" exists ...
	I1019 13:19:19.445132  499672 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:19.468422  499672 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 13:19:19.471594  499672 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:19:19.471619  499672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 13:19:19.472136  499672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:19.480097  499672 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 13:19:19.480120  499672 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 13:19:19.480190  499672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:19.521853  499672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:19.529262  499672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:19.704010  499672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 13:19:19.749597  499672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:19:19.778705  499672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:19:19.943039  499672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 13:19:20.446452  499672 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1019 13:19:20.447457  499672 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:19:20.448608  499672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:19:20.620053  499672 api_server.go:72] duration metric: took 1.212335689s to wait for apiserver process to appear ...
	I1019 13:19:20.620073  499672 api_server.go:88] waiting for apiserver healthz status ...
	I1019 13:19:20.620092  499672 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 13:19:20.633650  499672 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 13:19:20.634959  499672 api_server.go:141] control plane version: v1.34.1
	I1019 13:19:20.635023  499672 api_server.go:131] duration metric: took 14.942925ms to wait for apiserver health ...
	I1019 13:19:20.635048  499672 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 13:19:20.644734  499672 system_pods.go:59] 8 kube-system pods found
	I1019 13:19:20.644771  499672 system_pods.go:61] "coredns-66bc5c9577-gbtfz" [5f13f614-c060-4f18-90ea-149a9ddd78c3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 13:19:20.644780  499672 system_pods.go:61] "etcd-newest-cni-895642" [ddf46703-f963-42c8-b02e-db35d858825b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:19:20.644787  499672 system_pods.go:61] "kindnet-wtcgs" [348e9181-c940-4d5f-b47a-562fbdd88f99] Running
	I1019 13:19:20.644793  499672 system_pods.go:61] "kube-apiserver-newest-cni-895642" [320e873e-5b32-42b4-ab87-be63b052dd3b] Running
	I1019 13:19:20.644801  499672 system_pods.go:61] "kube-controller-manager-newest-cni-895642" [eb67514c-1127-4953-aaa0-e0b02b9a5c38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:19:20.644811  499672 system_pods.go:61] "kube-proxy-f8v8j" [4ce496c6-376a-47a7-adb5-90a20dfe8e09] Running
	I1019 13:19:20.644818  499672 system_pods.go:61] "kube-scheduler-newest-cni-895642" [981bd088-aa41-45c6-8263-995758c40371] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:19:20.644828  499672 system_pods.go:61] "storage-provisioner" [67bebe62-06cb-4eca-916e-d2799b856c75] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 13:19:20.644840  499672 system_pods.go:74] duration metric: took 9.772786ms to wait for pod list to return data ...
	I1019 13:19:20.644849  499672 default_sa.go:34] waiting for default service account to be created ...
	I1019 13:19:20.647136  499672 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 13:19:20.647933  499672 default_sa.go:45] found service account: "default"
	I1019 13:19:20.647984  499672 default_sa.go:55] duration metric: took 3.125663ms for default service account to be created ...
	I1019 13:19:20.647996  499672 kubeadm.go:586] duration metric: took 1.240282196s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 13:19:20.648017  499672 node_conditions.go:102] verifying NodePressure condition ...
	I1019 13:19:20.650465  499672 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 13:19:20.650495  499672 node_conditions.go:123] node cpu capacity is 2
	I1019 13:19:20.650509  499672 node_conditions.go:105] duration metric: took 2.486459ms to run NodePressure ...
	I1019 13:19:20.650521  499672 start.go:241] waiting for startup goroutines ...
	I1019 13:19:20.650561  499672 addons.go:514] duration metric: took 1.242393068s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 13:19:20.951396  499672 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-895642" context rescaled to 1 replicas
	I1019 13:19:20.951448  499672 start.go:246] waiting for cluster config update ...
	I1019 13:19:20.951462  499672 start.go:255] writing updated cluster config ...
	I1019 13:19:20.951783  499672 ssh_runner.go:195] Run: rm -f paused
	I1019 13:19:21.014076  499672 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 13:19:21.017357  499672 out.go:179] * Done! kubectl is now configured to use "newest-cni-895642" cluster and "default" namespace by default
	W1019 13:19:18.239013  496573 pod_ready.go:104] pod "coredns-66bc5c9577-qn68x" is not "Ready", error: <nil>
	I1019 13:19:20.738679  496573 pod_ready.go:94] pod "coredns-66bc5c9577-qn68x" is "Ready"
	I1019 13:19:20.738707  496573 pod_ready.go:86] duration metric: took 42.006103673s for pod "coredns-66bc5c9577-qn68x" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:19:20.741597  496573 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:19:20.746336  496573 pod_ready.go:94] pod "etcd-default-k8s-diff-port-455348" is "Ready"
	I1019 13:19:20.746371  496573 pod_ready.go:86] duration metric: took 4.750235ms for pod "etcd-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:19:20.748728  496573 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:19:20.753343  496573 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-455348" is "Ready"
	I1019 13:19:20.753408  496573 pod_ready.go:86] duration metric: took 4.654571ms for pod "kube-apiserver-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:19:20.755705  496573 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:19:20.937578  496573 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-455348" is "Ready"
	I1019 13:19:20.937609  496573 pod_ready.go:86] duration metric: took 181.876201ms for pod "kube-controller-manager-default-k8s-diff-port-455348" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:19:21.136929  496573 pod_ready.go:83] waiting for pod "kube-proxy-vbd99" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 13:19:21.536819  496573 pod_ready.go:94] pod "kube-proxy-vbd99" is "Ready"
	I1019 13:19:21.536848  496573 pod_ready.go:86] duration metric: took 399.890922ms for pod "kube-proxy-vbd99" in "kube-system" namespace to be "Ready" or be gone ...
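	The CoreDNS ConfigMap rewrite run at 13:19:19.704010 above splices a hosts block in front of the forward plugin (mapping host.minikube.internal to the host gateway 192.168.85.1) and turns on the log plugin. Reconstructed from the sed expressions rather than from captured output, the edited Corefile fragment should look roughly like:
	
	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }
	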
	
	
	==> CRI-O <==
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.122295905Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.130965463Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6583f494-480a-4bed-8f51-f81b091efbae name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.134036119Z" level=info msg="Ran pod sandbox 75a6de1a8d79e49acad388c6c040f1012a4119a3145f044a5eb4d282a168d173 with infra container: kube-system/kube-proxy-f8v8j/POD" id=6583f494-480a-4bed-8f51-f81b091efbae name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.142007436Z" level=info msg="Running pod sandbox: kube-system/kindnet-wtcgs/POD" id=75ca56ff-cfe7-4869-b51f-b5eddf58de21 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.142066358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.145333848Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=75ca56ff-cfe7-4869-b51f-b5eddf58de21 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.151053828Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=61f1c8be-b615-4663-81c3-6d1890b56e86 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.152331236Z" level=info msg="Ran pod sandbox 6ea8a0ed3d941e92a7805e6bfac18108f806af8f40dbe8f717e323bdd6416ebf with infra container: kube-system/kindnet-wtcgs/POD" id=75ca56ff-cfe7-4869-b51f-b5eddf58de21 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.153381154Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=21b05270-65fc-4105-ac1e-4ccb9d6583b1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.155507222Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e805da96-b6cc-401f-8a4d-5b9a83f01016 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.159471896Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=22e6aa6b-ca87-45dc-b5cc-ba5e912ba7fc name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.165265567Z" level=info msg="Creating container: kube-system/kindnet-wtcgs/kindnet-cni" id=0fc4d867-3aa3-4cfd-bbaa-126ad4eb477e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.165580935Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.168861596Z" level=info msg="Creating container: kube-system/kube-proxy-f8v8j/kube-proxy" id=3aca51cb-209e-466f-af1e-8220088951be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.171437623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.17765205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.181900609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.18274501Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.182844269Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.219120875Z" level=info msg="Created container 7f828f61e3c14095402d40e79e1bd06d182caeee307d36b0083272949dce5f56: kube-system/kindnet-wtcgs/kindnet-cni" id=0fc4d867-3aa3-4cfd-bbaa-126ad4eb477e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.220638951Z" level=info msg="Starting container: 7f828f61e3c14095402d40e79e1bd06d182caeee307d36b0083272949dce5f56" id=5ae128b7-5ec3-4e45-8c8d-56241609899f name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.230728591Z" level=info msg="Started container" PID=1417 containerID=7f828f61e3c14095402d40e79e1bd06d182caeee307d36b0083272949dce5f56 description=kube-system/kindnet-wtcgs/kindnet-cni id=5ae128b7-5ec3-4e45-8c8d-56241609899f name=/runtime.v1.RuntimeService/StartContainer sandboxID=6ea8a0ed3d941e92a7805e6bfac18108f806af8f40dbe8f717e323bdd6416ebf
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.252728325Z" level=info msg="Created container 3e6e6e6f51276493cb626d1478ba462ac3c052bacec85f0ded196c087150572a: kube-system/kube-proxy-f8v8j/kube-proxy" id=3aca51cb-209e-466f-af1e-8220088951be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.254519972Z" level=info msg="Starting container: 3e6e6e6f51276493cb626d1478ba462ac3c052bacec85f0ded196c087150572a" id=867caea1-85c4-47f0-89f8-2fd67ec7e5a0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:19:19 newest-cni-895642 crio[845]: time="2025-10-19T13:19:19.257625779Z" level=info msg="Started container" PID=1426 containerID=3e6e6e6f51276493cb626d1478ba462ac3c052bacec85f0ded196c087150572a description=kube-system/kube-proxy-f8v8j/kube-proxy id=867caea1-85c4-47f0-89f8-2fd67ec7e5a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=75a6de1a8d79e49acad388c6c040f1012a4119a3145f044a5eb4d282a168d173
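	The sandbox and container IDs CRI-O logs here match the container-status table below. The same view is available on the node straight from the CRI command line; a sketch, assuming crictl is present (it ships inside minikube node images):
	
	    sudo crictl pods                  # pod sandboxes, e.g. 75a6de1a8d79e... and 6ea8a0ed3d941...
	    sudo crictl ps -a                 # container list corresponding to the table below
	    sudo crictl logs 3e6e6e6f51276    # logs of the kube-proxy container, by ID prefix
	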
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3e6e6e6f51276       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   3 seconds ago       Running             kube-proxy                0                   75a6de1a8d79e       kube-proxy-f8v8j                            kube-system
	7f828f61e3c14       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   3 seconds ago       Running             kindnet-cni               0                   6ea8a0ed3d941       kindnet-wtcgs                               kube-system
	2febdcd938669       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   5e16f57b7d8d6       kube-scheduler-newest-cni-895642            kube-system
	d6c056a580e3e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   1295500ee3d9d       etcd-newest-cni-895642                      kube-system
	90e9b53d68ba0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   c1614a73fa324       kube-apiserver-newest-cni-895642            kube-system
	5afe3eaa39e6f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   161a55b78f54f       kube-controller-manager-newest-cni-895642   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-895642
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-895642
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=newest-cni-895642
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_19_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:19:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-895642
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:19:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:19:14 +0000   Sun, 19 Oct 2025 13:19:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:19:14 +0000   Sun, 19 Oct 2025 13:19:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:19:14 +0000   Sun, 19 Oct 2025 13:19:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 19 Oct 2025 13:19:14 +0000   Sun, 19 Oct 2025 13:19:07 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-895642
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                df9d6668-401f-4ce8-aa0c-269b36d9790d
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-895642                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10s
	  kube-system                 kindnet-wtcgs                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-895642             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-895642    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-f8v8j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-895642             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  16s (x8 over 16s)  kubelet          Node newest-cni-895642 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 16s)  kubelet          Node newest-cni-895642 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 16s)  kubelet          Node newest-cni-895642 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-895642 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-895642 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s                 kubelet          Node newest-cni-895642 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-895642 event: Registered Node newest-cni-895642 in Controller
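	The node.kubernetes.io/not-ready:NoSchedule taint shown above persists while the Ready condition is False, i.e. until kindnet drops its CNI config into /etc/cni/net.d; that is also why coredns and storage-provisioner were reported Unschedulable earlier in the log. Once the kubelet reports Ready, the node-lifecycle controller removes the taint and the pending pods schedule. A quick way to inspect the taint with stock kubectl:
	
	    kubectl get node newest-cni-895642 -o jsonpath='{.spec.taints}'
	    kubectl describe node newest-cni-895642 | grep -i taints
	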
	
	
	==> dmesg <==
	[ +16.315179] overlayfs: idmapped layers are currently not supported
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	[Oct19 13:15] overlayfs: idmapped layers are currently not supported
	[ +34.413925] overlayfs: idmapped layers are currently not supported
	[Oct19 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.716246] overlayfs: idmapped layers are currently not supported
	[Oct19 13:18] overlayfs: idmapped layers are currently not supported
	[Oct19 13:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d6c056a580e3e0ce117cadb3ddfd01fad76c477e40f5d71e7e2a10b4e304639b] <==
	{"level":"warn","ts":"2025-10-19T13:19:09.203212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.230176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.279666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.321939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.338108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.366869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.397353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.427311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.445829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.482556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.527317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.589819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.623674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.688222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.740314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.754873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.771673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.780494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.797794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.813956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.842122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.878264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.897003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:09.918275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:10.021195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42914","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:19:22 up  3:01,  0 user,  load average: 5.29, 3.87, 3.06
	Linux newest-cni-895642 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7f828f61e3c14095402d40e79e1bd06d182caeee307d36b0083272949dce5f56] <==
	I1019 13:19:19.331678       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:19:19.332115       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 13:19:19.332294       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:19:19.332341       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:19:19.332379       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:19:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:19:19.535844       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:19:19.536235       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:19:19.536516       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:19:19.538979       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
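	The final kindnet line records its network-policies plugin failing to register over NRI because the runtime does not expose /var/run/nri/nri.sock; the controller appears to carry on with its informers alone, and pod networking is unaffected. Whether the socket exists can be checked directly on the node; a sketch:
	
	    test -S /var/run/nri/nri.sock && echo "NRI socket present" || echo "no NRI socket"
	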
	
	
	==> kube-apiserver [90e9b53d68ba0f9a7f633435fb9670e1ee730bda504a19cc8aba0859a2193445] <==
	I1019 13:19:11.222876       1 policy_source.go:240] refreshing policies
	I1019 13:19:11.270710       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 13:19:11.303550       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:19:11.305897       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1019 13:19:11.318276       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1019 13:19:11.329999       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 13:19:11.335225       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:19:11.429044       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:19:11.773129       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 13:19:11.785059       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 13:19:11.785200       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:19:12.634554       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:19:12.730729       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:19:12.875729       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 13:19:12.886024       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1019 13:19:12.887248       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 13:19:12.892257       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:19:13.066721       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:19:14.031669       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 13:19:14.053204       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 13:19:14.065041       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 13:19:18.323003       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:19:18.328622       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:19:18.776429       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1019 13:19:19.071843       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5afe3eaa39e6fe1ff6aaf3627f67470622349603a3460879a0a76b3d53702b54] <==
	I1019 13:19:18.066941       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 13:19:18.066964       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 13:19:18.069279       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 13:19:18.069349       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:19:18.071965       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 13:19:18.074458       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 13:19:18.076733       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 13:19:18.084168       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 13:19:18.092578       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:19:18.095770       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 13:19:18.105026       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 13:19:18.113997       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 13:19:18.114333       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 13:19:18.114499       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-895642"
	I1019 13:19:18.114594       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 13:19:18.115087       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 13:19:18.115772       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 13:19:18.115949       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 13:19:18.115872       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:19:18.115882       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 13:19:18.115858       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 13:19:18.116925       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 13:19:18.120668       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 13:19:18.120902       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 13:19:18.123992       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	
	
	==> kube-proxy [3e6e6e6f51276493cb626d1478ba462ac3c052bacec85f0ded196c087150572a] <==
	I1019 13:19:19.398195       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:19:19.587421       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:19:19.688510       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:19:19.688544       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 13:19:19.688620       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:19:19.744652       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:19:19.744701       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:19:19.752830       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:19:19.755316       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:19:19.755346       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:19:19.760662       1 config.go:200] "Starting service config controller"
	I1019 13:19:19.760684       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:19:19.760699       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:19:19.760703       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:19:19.760715       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:19:19.760719       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:19:19.770235       1 config.go:309] "Starting node config controller"
	I1019 13:19:19.770255       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:19:19.770263       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:19:19.861224       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:19:19.861256       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 13:19:19.861308       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2febdcd938669473f518e9a8f67000a148c55f2bb6d7de8d050c54d5ee212ee9] <==
	I1019 13:19:12.167977       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:19:12.170066       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:19:12.170108       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:19:12.171030       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:19:12.171092       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 13:19:12.180238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 13:19:12.180429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 13:19:12.181001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 13:19:12.181108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 13:19:12.181336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 13:19:12.181446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 13:19:12.181516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 13:19:12.181589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 13:19:12.181711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 13:19:12.181786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 13:19:12.182754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 13:19:12.182903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 13:19:12.182981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 13:19:12.183052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 13:19:12.183153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 13:19:12.184709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1019 13:19:12.187737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 13:19:12.187835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 13:19:12.187879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1019 13:19:13.371127       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 13:19:14 newest-cni-895642 kubelet[1304]: I1019 13:19:14.383019    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8deffd2e7a15743e9038f0a7b49bf21c-usr-local-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-895642\" (UID: \"8deffd2e7a15743e9038f0a7b49bf21c\") " pod="kube-system/kube-controller-manager-newest-cni-895642"
	Oct 19 13:19:14 newest-cni-895642 kubelet[1304]: I1019 13:19:14.383038    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8deffd2e7a15743e9038f0a7b49bf21c-usr-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-895642\" (UID: \"8deffd2e7a15743e9038f0a7b49bf21c\") " pod="kube-system/kube-controller-manager-newest-cni-895642"
	Oct 19 13:19:14 newest-cni-895642 kubelet[1304]: I1019 13:19:14.953837    1304 apiserver.go:52] "Watching apiserver"
	Oct 19 13:19:14 newest-cni-895642 kubelet[1304]: I1019 13:19:14.979778    1304 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 19 13:19:15 newest-cni-895642 kubelet[1304]: I1019 13:19:15.099222    1304 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-895642"
	Oct 19 13:19:15 newest-cni-895642 kubelet[1304]: I1019 13:19:15.099641    1304 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-895642"
	Oct 19 13:19:15 newest-cni-895642 kubelet[1304]: E1019 13:19:15.121753    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-895642\" already exists" pod="kube-system/kube-scheduler-newest-cni-895642"
	Oct 19 13:19:15 newest-cni-895642 kubelet[1304]: I1019 13:19:15.126254    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-895642" podStartSLOduration=3.12623462 podStartE2EDuration="3.12623462s" podCreationTimestamp="2025-10-19 13:19:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:19:15.096978614 +0000 UTC m=+1.247370830" watchObservedRunningTime="2025-10-19 13:19:15.12623462 +0000 UTC m=+1.276626852"
	Oct 19 13:19:15 newest-cni-895642 kubelet[1304]: I1019 13:19:15.126423    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-895642" podStartSLOduration=1.126416325 podStartE2EDuration="1.126416325s" podCreationTimestamp="2025-10-19 13:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:19:15.126358182 +0000 UTC m=+1.276750808" watchObservedRunningTime="2025-10-19 13:19:15.126416325 +0000 UTC m=+1.276808541"
	Oct 19 13:19:15 newest-cni-895642 kubelet[1304]: E1019 13:19:15.126613    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-895642\" already exists" pod="kube-system/etcd-newest-cni-895642"
	Oct 19 13:19:15 newest-cni-895642 kubelet[1304]: I1019 13:19:15.167923    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-895642" podStartSLOduration=1.167903611 podStartE2EDuration="1.167903611s" podCreationTimestamp="2025-10-19 13:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:19:15.141599917 +0000 UTC m=+1.291992133" watchObservedRunningTime="2025-10-19 13:19:15.167903611 +0000 UTC m=+1.318295844"
	Oct 19 13:19:15 newest-cni-895642 kubelet[1304]: I1019 13:19:15.168054    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-895642" podStartSLOduration=1.168047154 podStartE2EDuration="1.168047154s" podCreationTimestamp="2025-10-19 13:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:19:15.167855119 +0000 UTC m=+1.318247343" watchObservedRunningTime="2025-10-19 13:19:15.168047154 +0000 UTC m=+1.318439370"
	Oct 19 13:19:18 newest-cni-895642 kubelet[1304]: I1019 13:19:18.158829    1304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 19 13:19:18 newest-cni-895642 kubelet[1304]: I1019 13:19:18.159425    1304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 19 13:19:18 newest-cni-895642 kubelet[1304]: I1019 13:19:18.922680    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ce496c6-376a-47a7-adb5-90a20dfe8e09-xtables-lock\") pod \"kube-proxy-f8v8j\" (UID: \"4ce496c6-376a-47a7-adb5-90a20dfe8e09\") " pod="kube-system/kube-proxy-f8v8j"
	Oct 19 13:19:18 newest-cni-895642 kubelet[1304]: I1019 13:19:18.922739    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ce496c6-376a-47a7-adb5-90a20dfe8e09-lib-modules\") pod \"kube-proxy-f8v8j\" (UID: \"4ce496c6-376a-47a7-adb5-90a20dfe8e09\") " pod="kube-system/kube-proxy-f8v8j"
	Oct 19 13:19:18 newest-cni-895642 kubelet[1304]: I1019 13:19:18.922760    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/348e9181-c940-4d5f-b47a-562fbdd88f99-cni-cfg\") pod \"kindnet-wtcgs\" (UID: \"348e9181-c940-4d5f-b47a-562fbdd88f99\") " pod="kube-system/kindnet-wtcgs"
	Oct 19 13:19:18 newest-cni-895642 kubelet[1304]: I1019 13:19:18.922783    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbnzd\" (UniqueName: \"kubernetes.io/projected/4ce496c6-376a-47a7-adb5-90a20dfe8e09-kube-api-access-rbnzd\") pod \"kube-proxy-f8v8j\" (UID: \"4ce496c6-376a-47a7-adb5-90a20dfe8e09\") " pod="kube-system/kube-proxy-f8v8j"
	Oct 19 13:19:18 newest-cni-895642 kubelet[1304]: I1019 13:19:18.922804    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/348e9181-c940-4d5f-b47a-562fbdd88f99-lib-modules\") pod \"kindnet-wtcgs\" (UID: \"348e9181-c940-4d5f-b47a-562fbdd88f99\") " pod="kube-system/kindnet-wtcgs"
	Oct 19 13:19:18 newest-cni-895642 kubelet[1304]: I1019 13:19:18.922822    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4ce496c6-376a-47a7-adb5-90a20dfe8e09-kube-proxy\") pod \"kube-proxy-f8v8j\" (UID: \"4ce496c6-376a-47a7-adb5-90a20dfe8e09\") " pod="kube-system/kube-proxy-f8v8j"
	Oct 19 13:19:18 newest-cni-895642 kubelet[1304]: I1019 13:19:18.922845    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/348e9181-c940-4d5f-b47a-562fbdd88f99-xtables-lock\") pod \"kindnet-wtcgs\" (UID: \"348e9181-c940-4d5f-b47a-562fbdd88f99\") " pod="kube-system/kindnet-wtcgs"
	Oct 19 13:19:18 newest-cni-895642 kubelet[1304]: I1019 13:19:18.922864    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwhmh\" (UniqueName: \"kubernetes.io/projected/348e9181-c940-4d5f-b47a-562fbdd88f99-kube-api-access-pwhmh\") pod \"kindnet-wtcgs\" (UID: \"348e9181-c940-4d5f-b47a-562fbdd88f99\") " pod="kube-system/kindnet-wtcgs"
	Oct 19 13:19:19 newest-cni-895642 kubelet[1304]: I1019 13:19:19.034197    1304 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 19 13:19:20 newest-cni-895642 kubelet[1304]: I1019 13:19:20.174611    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wtcgs" podStartSLOduration=2.174588939 podStartE2EDuration="2.174588939s" podCreationTimestamp="2025-10-19 13:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:19:20.146687159 +0000 UTC m=+6.297079383" watchObservedRunningTime="2025-10-19 13:19:20.174588939 +0000 UTC m=+6.324981196"
	Oct 19 13:19:21 newest-cni-895642 kubelet[1304]: I1019 13:19:21.931411    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f8v8j" podStartSLOduration=3.931387339 podStartE2EDuration="3.931387339s" podCreationTimestamp="2025-10-19 13:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 13:19:20.211595377 +0000 UTC m=+6.361987609" watchObservedRunningTime="2025-10-19 13:19:21.931387339 +0000 UTC m=+8.081779555"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895642 -n newest-cni-895642
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-895642 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gbtfz storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-895642 describe pod coredns-66bc5c9577-gbtfz storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-895642 describe pod coredns-66bc5c9577-gbtfz storage-provisioner: exit status 1 (84.80934ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gbtfz" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-895642 describe pod coredns-66bc5c9577-gbtfz storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.53s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-455348 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-455348 --alsologtostderr -v=1: exit status 80 (2.519243428s)

-- stdout --
	* Pausing node default-k8s-diff-port-455348 ... 
	
	

-- /stdout --
** stderr ** 
	I1019 13:19:35.224667  504677 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:19:35.224977  504677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:19:35.225014  504677 out.go:374] Setting ErrFile to fd 2...
	I1019 13:19:35.225034  504677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:19:35.225354  504677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:19:35.225665  504677 out.go:368] Setting JSON to false
	I1019 13:19:35.225747  504677 mustload.go:65] Loading cluster: default-k8s-diff-port-455348
	I1019 13:19:35.226211  504677 config.go:182] Loaded profile config "default-k8s-diff-port-455348": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:19:35.226699  504677 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-455348 --format={{.State.Status}}
	I1019 13:19:35.264332  504677 host.go:66] Checking if "default-k8s-diff-port-455348" exists ...
	I1019 13:19:35.264769  504677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:19:35.401918  504677 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-19 13:19:35.367504845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:19:35.402646  504677 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-455348 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 13:19:35.406238  504677 out.go:179] * Pausing node default-k8s-diff-port-455348 ... 
	I1019 13:19:35.409193  504677 host.go:66] Checking if "default-k8s-diff-port-455348" exists ...
	I1019 13:19:35.409565  504677 ssh_runner.go:195] Run: systemctl --version
	I1019 13:19:35.409616  504677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-455348
	I1019 13:19:35.450013  504677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/default-k8s-diff-port-455348/id_rsa Username:docker}
	I1019 13:19:35.568719  504677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:19:35.591436  504677 pause.go:52] kubelet running: true
	I1019 13:19:35.591521  504677 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:19:36.029288  504677 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:19:36.029385  504677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:19:36.173040  504677 cri.go:89] found id: "aa8fdf86ae37de45052d5f9afe9fd03316efa20075210f8c3437382ef6fb7292"
	I1019 13:19:36.173062  504677 cri.go:89] found id: "f6bc238b7f538a7f20fc0f48f49813daa4ac28c616e85783e9483bfc32f490fc"
	I1019 13:19:36.173067  504677 cri.go:89] found id: "64b3263c4cb9377c973c0405da32ab9f8ae72ae6589d72bc7ad0b1fc5dc41c04"
	I1019 13:19:36.173076  504677 cri.go:89] found id: "7b9203ac4a1b0f71c0dd63a1f8c349a569a3ce4f03d54c74eaa8ea2b7fa8839e"
	I1019 13:19:36.173080  504677 cri.go:89] found id: "77fee27408687abc67ef099c98ed62f58cae326fcb4d0fe2e71f7876a1fa488a"
	I1019 13:19:36.173083  504677 cri.go:89] found id: "d68e31f9ddc629258adae34a5c4914451d4039479223db3fc89b9ec518005fc0"
	I1019 13:19:36.173086  504677 cri.go:89] found id: "9dc424071c1b92771542bfccd38e435461e8182ac00adb300909438d1cbf9b8f"
	I1019 13:19:36.173090  504677 cri.go:89] found id: "b34e96695557c6959cce715a57b32eef60a662626ab95fd5b08a3505f2cfe53a"
	I1019 13:19:36.173092  504677 cri.go:89] found id: "e5b09162fcaf4578399f5a03831d7d61cf4bfd1901478ea7fed991f19b9f174e"
	I1019 13:19:36.173101  504677 cri.go:89] found id: "58deb2a42f9abf760898d192ccbd4c49190875c9116b13743bcd893003255084"
	I1019 13:19:36.173104  504677 cri.go:89] found id: "f1059e6092955af4f3316486a54cacbf36083e9dda490f278b0fb3ef045f8eb2"
	I1019 13:19:36.173107  504677 cri.go:89] found id: ""
	I1019 13:19:36.173154  504677 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:19:36.188595  504677 retry.go:31] will retry after 186.055562ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:19:36Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:19:36.374978  504677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:19:36.402310  504677 pause.go:52] kubelet running: false
	I1019 13:19:36.402371  504677 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:19:36.696606  504677 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:19:36.696727  504677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:19:36.819470  504677 cri.go:89] found id: "aa8fdf86ae37de45052d5f9afe9fd03316efa20075210f8c3437382ef6fb7292"
	I1019 13:19:36.819548  504677 cri.go:89] found id: "f6bc238b7f538a7f20fc0f48f49813daa4ac28c616e85783e9483bfc32f490fc"
	I1019 13:19:36.819569  504677 cri.go:89] found id: "64b3263c4cb9377c973c0405da32ab9f8ae72ae6589d72bc7ad0b1fc5dc41c04"
	I1019 13:19:36.819593  504677 cri.go:89] found id: "7b9203ac4a1b0f71c0dd63a1f8c349a569a3ce4f03d54c74eaa8ea2b7fa8839e"
	I1019 13:19:36.819621  504677 cri.go:89] found id: "77fee27408687abc67ef099c98ed62f58cae326fcb4d0fe2e71f7876a1fa488a"
	I1019 13:19:36.819644  504677 cri.go:89] found id: "d68e31f9ddc629258adae34a5c4914451d4039479223db3fc89b9ec518005fc0"
	I1019 13:19:36.819665  504677 cri.go:89] found id: "9dc424071c1b92771542bfccd38e435461e8182ac00adb300909438d1cbf9b8f"
	I1019 13:19:36.819685  504677 cri.go:89] found id: "b34e96695557c6959cce715a57b32eef60a662626ab95fd5b08a3505f2cfe53a"
	I1019 13:19:36.819705  504677 cri.go:89] found id: "e5b09162fcaf4578399f5a03831d7d61cf4bfd1901478ea7fed991f19b9f174e"
	I1019 13:19:36.819735  504677 cri.go:89] found id: "58deb2a42f9abf760898d192ccbd4c49190875c9116b13743bcd893003255084"
	I1019 13:19:36.819762  504677 cri.go:89] found id: "f1059e6092955af4f3316486a54cacbf36083e9dda490f278b0fb3ef045f8eb2"
	I1019 13:19:36.819782  504677 cri.go:89] found id: ""
	I1019 13:19:36.819871  504677 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:19:36.837029  504677 retry.go:31] will retry after 349.101183ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:19:36Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:19:37.186582  504677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:19:37.209971  504677 pause.go:52] kubelet running: false
	I1019 13:19:37.210090  504677 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:19:37.498860  504677 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:19:37.499004  504677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:19:37.625486  504677 cri.go:89] found id: "aa8fdf86ae37de45052d5f9afe9fd03316efa20075210f8c3437382ef6fb7292"
	I1019 13:19:37.625574  504677 cri.go:89] found id: "f6bc238b7f538a7f20fc0f48f49813daa4ac28c616e85783e9483bfc32f490fc"
	I1019 13:19:37.625594  504677 cri.go:89] found id: "64b3263c4cb9377c973c0405da32ab9f8ae72ae6589d72bc7ad0b1fc5dc41c04"
	I1019 13:19:37.625613  504677 cri.go:89] found id: "7b9203ac4a1b0f71c0dd63a1f8c349a569a3ce4f03d54c74eaa8ea2b7fa8839e"
	I1019 13:19:37.625648  504677 cri.go:89] found id: "77fee27408687abc67ef099c98ed62f58cae326fcb4d0fe2e71f7876a1fa488a"
	I1019 13:19:37.625666  504677 cri.go:89] found id: "d68e31f9ddc629258adae34a5c4914451d4039479223db3fc89b9ec518005fc0"
	I1019 13:19:37.625715  504677 cri.go:89] found id: "9dc424071c1b92771542bfccd38e435461e8182ac00adb300909438d1cbf9b8f"
	I1019 13:19:37.625740  504677 cri.go:89] found id: "b34e96695557c6959cce715a57b32eef60a662626ab95fd5b08a3505f2cfe53a"
	I1019 13:19:37.625760  504677 cri.go:89] found id: "e5b09162fcaf4578399f5a03831d7d61cf4bfd1901478ea7fed991f19b9f174e"
	I1019 13:19:37.625782  504677 cri.go:89] found id: "58deb2a42f9abf760898d192ccbd4c49190875c9116b13743bcd893003255084"
	I1019 13:19:37.625801  504677 cri.go:89] found id: "f1059e6092955af4f3316486a54cacbf36083e9dda490f278b0fb3ef045f8eb2"
	I1019 13:19:37.625826  504677 cri.go:89] found id: ""
	I1019 13:19:37.625893  504677 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:19:37.641718  504677 out.go:203] 
	W1019 13:19:37.644634  504677 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:19:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:19:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 13:19:37.644846  504677 out.go:285] * 
	* 
	W1019 13:19:37.653006  504677 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 13:19:37.655969  504677 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-455348 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-455348
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-455348:

-- stdout --
	[
	    {
	        "Id": "6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7",
	        "Created": "2025-10-19T13:16:44.03379204Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496701,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:18:21.840756805Z",
	            "FinishedAt": "2025-10-19T13:18:21.001094961Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/hosts",
	        "LogPath": "/var/lib/docker/containers/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7-json.log",
	        "Name": "/default-k8s-diff-port-455348",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-455348:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-455348",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7",
	                "LowerDir": "/var/lib/docker/overlay2/69c3312626a00a0a29de39da0ee3edd7eb25e0b33a22ef9214343606d7a497c2-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69c3312626a00a0a29de39da0ee3edd7eb25e0b33a22ef9214343606d7a497c2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69c3312626a00a0a29de39da0ee3edd7eb25e0b33a22ef9214343606d7a497c2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69c3312626a00a0a29de39da0ee3edd7eb25e0b33a22ef9214343606d7a497c2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-455348",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-455348/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-455348",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-455348",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-455348",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "206c6caa27503d1e4d7946d22471664704abd1474ab988c95d0c9f6ae9bd541d",
	            "SandboxKey": "/var/run/docker/netns/206c6caa2750",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-455348": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:f8:1c:40:18:3a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feb5b6cb71ad4f1814069d9c1fecfa12355d747dd07980e633df65a307f6c04b",
	                    "EndpointID": "dc48b2e24d13c19873e3da1ce5a751a52b1bc85db1269209adc65cc8d0a34b3b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-455348",
	                        "6519411d3b62"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-455348 -n default-k8s-diff-port-455348
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-455348 -n default-k8s-diff-port-455348: exit status 2 (389.980506ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-455348 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-455348 logs -n 25: (1.774033993s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-108149 image list --format=json                                                                                                                                                                                                    │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ pause   │ -p no-preload-108149 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │                     │
	│ delete  │ -p no-preload-108149                                                                                                                                                                                                                          │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p no-preload-108149                                                                                                                                                                                                                          │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p disable-driver-mounts-418719                                                                                                                                                                                                               │ disable-driver-mounts-418719 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │                     │
	│ stop    │ -p embed-certs-834340 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-834340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-455348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-455348 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-455348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:19 UTC │
	│ image   │ embed-certs-834340 image list --format=json                                                                                                                                                                                                   │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ pause   │ -p embed-certs-834340 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ delete  │ -p embed-certs-834340                                                                                                                                                                                                                         │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ delete  │ -p embed-certs-834340                                                                                                                                                                                                                         │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ start   │ -p newest-cni-895642 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-895642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	│ stop    │ -p newest-cni-895642 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-895642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ start   │ -p newest-cni-895642 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	│ image   │ default-k8s-diff-port-455348 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ pause   │ -p default-k8s-diff-port-455348 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:19:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:19:25.169345  503186 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:19:25.169574  503186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:19:25.169605  503186 out.go:374] Setting ErrFile to fd 2...
	I1019 13:19:25.169626  503186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:19:25.169968  503186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:19:25.170447  503186 out.go:368] Setting JSON to false
	I1019 13:19:25.171637  503186 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10916,"bootTime":1760869050,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:19:25.171742  503186 start.go:141] virtualization:  
	I1019 13:19:25.174991  503186 out.go:179] * [newest-cni-895642] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:19:25.179084  503186 notify.go:220] Checking for updates...
	I1019 13:19:25.180010  503186 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:19:25.183043  503186 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:19:25.186046  503186 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:19:25.189047  503186 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:19:25.192100  503186 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:19:25.195015  503186 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:19:25.198292  503186 config.go:182] Loaded profile config "newest-cni-895642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:19:25.198883  503186 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:19:25.226453  503186 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:19:25.226605  503186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:19:25.295086  503186 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:19:25.278659503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:19:25.295192  503186 docker.go:318] overlay module found
	I1019 13:19:25.298224  503186 out.go:179] * Using the docker driver based on existing profile
	I1019 13:19:25.301721  503186 start.go:305] selected driver: docker
	I1019 13:19:25.301740  503186 start.go:925] validating driver "docker" against &{Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:19:25.301843  503186 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:19:25.302559  503186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:19:25.357591  503186 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:19:25.348074344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:19:25.357979  503186 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 13:19:25.358017  503186 cni.go:84] Creating CNI manager for ""
	I1019 13:19:25.358086  503186 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:19:25.358136  503186 start.go:349] cluster config:
	{Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:19:25.361457  503186 out.go:179] * Starting "newest-cni-895642" primary control-plane node in "newest-cni-895642" cluster
	I1019 13:19:25.364282  503186 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:19:25.367200  503186 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:19:25.370018  503186 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:19:25.370106  503186 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:19:25.370116  503186 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 13:19:25.370138  503186 cache.go:58] Caching tarball of preloaded images
	I1019 13:19:25.370227  503186 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:19:25.370241  503186 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 13:19:25.370363  503186 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/config.json ...
	I1019 13:19:25.389423  503186 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:19:25.389445  503186 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:19:25.389464  503186 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:19:25.389487  503186 start.go:360] acquireMachinesLock for newest-cni-895642: {Name:mke5c4230882c7c86983f0da461147450e8e886d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:19:25.389556  503186 start.go:364] duration metric: took 46.253µs to acquireMachinesLock for "newest-cni-895642"
	I1019 13:19:25.389579  503186 start.go:96] Skipping create...Using existing machine configuration
	I1019 13:19:25.389586  503186 fix.go:54] fixHost starting: 
	I1019 13:19:25.389918  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:25.406454  503186 fix.go:112] recreateIfNeeded on newest-cni-895642: state=Stopped err=<nil>
	W1019 13:19:25.406489  503186 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 13:19:25.409740  503186 out.go:252] * Restarting existing docker container for "newest-cni-895642" ...
	I1019 13:19:25.409823  503186 cli_runner.go:164] Run: docker start newest-cni-895642
	I1019 13:19:25.672026  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:25.703460  503186 kic.go:430] container "newest-cni-895642" state is running.
	I1019 13:19:25.704103  503186 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895642
	I1019 13:19:25.727547  503186 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/config.json ...
	I1019 13:19:25.727779  503186 machine.go:93] provisionDockerMachine start ...
	I1019 13:19:25.727860  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:25.756624  503186 main.go:141] libmachine: Using SSH client type: native
	I1019 13:19:25.757520  503186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1019 13:19:25.757541  503186 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:19:25.758657  503186 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1019 13:19:28.913337  503186 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-895642
	
	I1019 13:19:28.913369  503186 ubuntu.go:182] provisioning hostname "newest-cni-895642"
	I1019 13:19:28.913434  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:28.933553  503186 main.go:141] libmachine: Using SSH client type: native
	I1019 13:19:28.934046  503186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1019 13:19:28.934066  503186 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-895642 && echo "newest-cni-895642" | sudo tee /etc/hostname
	I1019 13:19:29.099311  503186 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-895642
	
	I1019 13:19:29.099432  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:29.119806  503186 main.go:141] libmachine: Using SSH client type: native
	I1019 13:19:29.120136  503186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1019 13:19:29.120158  503186 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895642' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895642/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895642' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:19:29.277884  503186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 13:19:29.277914  503186 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:19:29.277946  503186 ubuntu.go:190] setting up certificates
	I1019 13:19:29.277961  503186 provision.go:84] configureAuth start
	I1019 13:19:29.278034  503186 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895642
	I1019 13:19:29.301808  503186 provision.go:143] copyHostCerts
	I1019 13:19:29.301873  503186 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:19:29.301892  503186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:19:29.301967  503186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:19:29.302085  503186 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:19:29.302090  503186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:19:29.302117  503186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:19:29.302199  503186 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:19:29.302205  503186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:19:29.302233  503186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:19:29.302290  503186 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.newest-cni-895642 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-895642]
	I1019 13:19:29.374167  503186 provision.go:177] copyRemoteCerts
	I1019 13:19:29.374259  503186 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:19:29.374318  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:29.391140  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:29.493830  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:19:29.514390  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 13:19:29.532916  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 13:19:29.550713  503186 provision.go:87] duration metric: took 272.732509ms to configureAuth
	I1019 13:19:29.550741  503186 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:19:29.550946  503186 config.go:182] Loaded profile config "newest-cni-895642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:19:29.551070  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:29.569921  503186 main.go:141] libmachine: Using SSH client type: native
	I1019 13:19:29.570253  503186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1019 13:19:29.570274  503186 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:19:29.873306  503186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:19:29.873332  503186 machine.go:96] duration metric: took 4.145535815s to provisionDockerMachine
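	
The /etc/sysconfig/crio.minikube file written a few lines above only takes effect because the image's crio systemd unit sources it. The wiring is conventionally an EnvironmentFile directive along these lines; the fragment below is a hypothetical illustration, with the exact unit path and variable usage assumed rather than shown in this log:

    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube
    ExecStart=
    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS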
	I1019 13:19:29.873352  503186 start.go:293] postStartSetup for "newest-cni-895642" (driver="docker")
	I1019 13:19:29.873364  503186 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:19:29.873444  503186 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:19:29.873490  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:29.890148  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:29.997258  503186 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:19:30.002593  503186 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:19:30.002644  503186 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:19:30.002659  503186 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:19:30.002738  503186 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:19:30.002829  503186 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:19:30.002936  503186 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:19:30.029850  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:19:30.066991  503186 start.go:296] duration metric: took 193.620206ms for postStartSetup
	I1019 13:19:30.067112  503186 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:19:30.067248  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:30.088223  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:30.191529  503186 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:19:30.196369  503186 fix.go:56] duration metric: took 4.806775977s for fixHost
	I1019 13:19:30.196395  503186 start.go:83] releasing machines lock for "newest-cni-895642", held for 4.806827736s
	I1019 13:19:30.196471  503186 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895642
	I1019 13:19:30.214998  503186 ssh_runner.go:195] Run: cat /version.json
	I1019 13:19:30.215056  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:30.215139  503186 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:19:30.215199  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:30.240016  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:30.241564  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:30.346049  503186 ssh_runner.go:195] Run: systemctl --version
	I1019 13:19:30.440967  503186 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:19:30.476294  503186 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:19:30.480767  503186 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:19:30.480880  503186 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:19:30.488567  503186 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 13:19:30.488602  503186 start.go:495] detecting cgroup driver to use...
	I1019 13:19:30.488634  503186 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:19:30.488699  503186 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:19:30.504768  503186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:19:30.517613  503186 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:19:30.517744  503186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:19:30.534697  503186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:19:30.547999  503186 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:19:30.666826  503186 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:19:30.789564  503186 docker.go:234] disabling docker service ...
	I1019 13:19:30.789718  503186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:19:30.805667  503186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:19:30.827277  503186 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:19:30.950983  503186 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:19:31.080274  503186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:19:31.095662  503186 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:19:31.111621  503186 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 13:19:31.111694  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.122130  503186 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:19:31.122227  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.132706  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.142968  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.152846  503186 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:19:31.161851  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.171479  503186 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.180553  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.190292  503186 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:19:31.198459  503186 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:19:31.205996  503186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:19:31.330350  503186 ssh_runner.go:195] Run: sudo systemctl restart crio
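	
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before crio is restarted. This is a reconstruction from the commands, not a capture of the file, and the section headers follow upstream crio.conf convention (the drop-in itself may not repeat them):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]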
	I1019 13:19:31.465643  503186 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:19:31.465758  503186 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:19:31.469621  503186 start.go:563] Will wait 60s for crictl version
	I1019 13:19:31.469847  503186 ssh_runner.go:195] Run: which crictl
	I1019 13:19:31.473844  503186 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:19:31.498952  503186 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 13:19:31.499106  503186 ssh_runner.go:195] Run: crio --version
	I1019 13:19:31.528942  503186 ssh_runner.go:195] Run: crio --version
	I1019 13:19:31.561590  503186 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 13:19:31.564368  503186 cli_runner.go:164] Run: docker network inspect newest-cni-895642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:19:31.581115  503186 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 13:19:31.584948  503186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:19:31.597969  503186 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 13:19:31.600776  503186 kubeadm.go:883] updating cluster {Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 13:19:31.600918  503186 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:19:31.601013  503186 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:19:31.635369  503186 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:19:31.635393  503186 crio.go:433] Images already preloaded, skipping extraction
	I1019 13:19:31.635446  503186 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:19:31.662206  503186 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:19:31.662228  503186 cache_images.go:85] Images are preloaded, skipping loading
	I1019 13:19:31.662251  503186 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 13:19:31.662400  503186 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-895642 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 13:19:31.662503  503186 ssh_runner.go:195] Run: crio config
	I1019 13:19:31.740719  503186 cni.go:84] Creating CNI manager for ""
	I1019 13:19:31.740744  503186 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:19:31.740791  503186 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 13:19:31.740824  503186 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895642 NodeName:newest-cni-895642 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:19:31.740960  503186 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-895642"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
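	
The rendered kubeadm config above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch that lists the apiVersion and kind of each document, e.g. to sanity-check /var/tmp/minikube/kubeadm.yaml.new; it assumes gopkg.in/yaml.v3 and is not minikube code:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open(os.Args[1]) // e.g. kubeadm.yaml.new
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            // Decode returns io.EOF once all documents are consumed.
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Println(doc.APIVersion, doc.Kind)
        }
    }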
	
	I1019 13:19:31.741033  503186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 13:19:31.749166  503186 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 13:19:31.749265  503186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 13:19:31.757972  503186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 13:19:31.772492  503186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:19:31.785933  503186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1019 13:19:31.799868  503186 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 13:19:31.803838  503186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:19:31.813960  503186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:19:31.927165  503186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:19:31.943424  503186 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642 for IP: 192.168.85.2
	I1019 13:19:31.943448  503186 certs.go:195] generating shared ca certs ...
	I1019 13:19:31.943464  503186 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:31.943596  503186 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 13:19:31.943651  503186 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 13:19:31.943663  503186 certs.go:257] generating profile certs ...
	I1019 13:19:31.943751  503186 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/client.key
	I1019 13:19:31.943815  503186 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.key.d4125fb8
	I1019 13:19:31.943857  503186 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.key
	I1019 13:19:31.943986  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem (1338 bytes)
	W1019 13:19:31.944020  503186 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518_empty.pem, impossibly tiny 0 bytes
	I1019 13:19:31.944033  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 13:19:31.944067  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 13:19:31.944096  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 13:19:31.944123  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 13:19:31.944168  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:19:31.944773  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 13:19:31.968205  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 13:19:31.988543  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 13:19:32.011799  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 13:19:32.034572  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 13:19:32.058346  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 13:19:32.081243  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 13:19:32.107781  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 13:19:32.139561  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /usr/share/ca-certificates/2945182.pem (1708 bytes)
	I1019 13:19:32.167240  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 13:19:32.190650  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem --> /usr/share/ca-certificates/294518.pem (1338 bytes)
	I1019 13:19:32.210444  503186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 13:19:32.225354  503186 ssh_runner.go:195] Run: openssl version
	I1019 13:19:32.231708  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 13:19:32.240811  503186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:19:32.244713  503186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:19:32.244786  503186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:19:32.294059  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 13:19:32.302549  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294518.pem && ln -fs /usr/share/ca-certificates/294518.pem /etc/ssl/certs/294518.pem"
	I1019 13:19:32.312139  503186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294518.pem
	I1019 13:19:32.315693  503186 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:20 /usr/share/ca-certificates/294518.pem
	I1019 13:19:32.315781  503186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294518.pem
	I1019 13:19:32.356958  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294518.pem /etc/ssl/certs/51391683.0"
	I1019 13:19:32.365153  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2945182.pem && ln -fs /usr/share/ca-certificates/2945182.pem /etc/ssl/certs/2945182.pem"
	I1019 13:19:32.373578  503186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2945182.pem
	I1019 13:19:32.377296  503186 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:20 /usr/share/ca-certificates/2945182.pem
	I1019 13:19:32.377391  503186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2945182.pem
	I1019 13:19:32.421016  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2945182.pem /etc/ssl/certs/3ec20f2e.0"
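	
The openssl/ln pairs above implement OpenSSL's c_rehash convention: a CA certificate is found via a symlink named <subject-hash>.0 in /etc/ssl/certs. A minimal Go sketch of the same convention (not minikube's code; the path matches the log's minikubeCA example):

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        certPath := "/usr/share/ca-certificates/minikubeCA.pem"
        // "openssl x509 -hash -noout" prints the subject-name hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        // Mirrors the log's /etc/ssl/certs/b5213941.0 link.
        if err := os.Symlink(certPath, "/etc/ssl/certs/"+hash+".0"); err != nil && !os.IsExist(err) {
            panic(err)
        }
    }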
	I1019 13:19:32.429783  503186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 13:19:32.433636  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 13:19:32.475552  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 13:19:32.516702  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 13:19:32.557815  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 13:19:32.600480  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 13:19:32.650589  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1019 13:19:32.714948  503186 kubeadm.go:400] StartCluster: {Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:19:32.715039  503186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 13:19:32.715138  503186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 13:19:32.785785  503186 cri.go:89] found id: "df7751d1304bdecb2f8c2da9564eb9648edb59cf776486a8eab0e66763b2a99a"
	I1019 13:19:32.785808  503186 cri.go:89] found id: ""
	I1019 13:19:32.785895  503186 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 13:19:32.812260  503186 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:19:32Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:19:32.812389  503186 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 13:19:32.834629  503186 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 13:19:32.834659  503186 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 13:19:32.834752  503186 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 13:19:32.860259  503186 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 13:19:32.860861  503186 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-895642" does not appear in /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:19:32.861329  503186 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-292654/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-895642" cluster setting kubeconfig missing "newest-cni-895642" context setting]
	I1019 13:19:32.862441  503186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:32.866956  503186 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 13:19:32.894204  503186 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 13:19:32.894240  503186 kubeadm.go:601] duration metric: took 59.575017ms to restartPrimaryControlPlane
	I1019 13:19:32.894250  503186 kubeadm.go:402] duration metric: took 179.312154ms to StartCluster
	I1019 13:19:32.894265  503186 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:32.894332  503186 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:19:32.895329  503186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:32.895543  503186 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:19:32.895955  503186 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 13:19:32.896047  503186 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-895642"
	I1019 13:19:32.896069  503186 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-895642"
	W1019 13:19:32.896075  503186 addons.go:247] addon storage-provisioner should already be in state true
	I1019 13:19:32.896106  503186 host.go:66] Checking if "newest-cni-895642" exists ...
	I1019 13:19:32.896714  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:32.896881  503186 config.go:182] Loaded profile config "newest-cni-895642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:19:32.896933  503186 addons.go:69] Setting dashboard=true in profile "newest-cni-895642"
	I1019 13:19:32.896944  503186 addons.go:238] Setting addon dashboard=true in "newest-cni-895642"
	W1019 13:19:32.896950  503186 addons.go:247] addon dashboard should already be in state true
	I1019 13:19:32.896967  503186 host.go:66] Checking if "newest-cni-895642" exists ...
	I1019 13:19:32.897398  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:32.898253  503186 addons.go:69] Setting default-storageclass=true in profile "newest-cni-895642"
	I1019 13:19:32.898321  503186 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-895642"
	I1019 13:19:32.898658  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:32.899976  503186 out.go:179] * Verifying Kubernetes components...
	I1019 13:19:32.903268  503186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:19:32.955871  503186 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 13:19:32.958895  503186 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:19:32.958923  503186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 13:19:32.958989  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:32.959467  503186 addons.go:238] Setting addon default-storageclass=true in "newest-cni-895642"
	W1019 13:19:32.959482  503186 addons.go:247] addon default-storageclass should already be in state true
	I1019 13:19:32.959506  503186 host.go:66] Checking if "newest-cni-895642" exists ...
	I1019 13:19:32.959935  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:32.977658  503186 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 13:19:32.982573  503186 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 13:19:32.985533  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 13:19:32.985564  503186 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 13:19:32.985636  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:33.020700  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:33.045991  503186 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 13:19:33.046014  503186 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 13:19:33.046082  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:33.055558  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:33.097899  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:33.300589  503186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:19:33.323301  503186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:19:33.338409  503186 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:19:33.338537  503186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:19:33.373599  503186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 13:19:33.399805  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 13:19:33.399871  503186 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 13:19:33.446692  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 13:19:33.446720  503186 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 13:19:33.535860  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 13:19:33.535897  503186 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 13:19:33.598982  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 13:19:33.599016  503186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 13:19:33.632171  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 13:19:33.632199  503186 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 13:19:33.651270  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 13:19:33.651296  503186 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 13:19:33.664849  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 13:19:33.664876  503186 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 13:19:33.678940  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 13:19:33.678961  503186 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 13:19:33.692305  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 13:19:33.692329  503186 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 13:19:33.711585  503186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
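	For reference, the apply step logged above can be replayed by hand; a minimal sketch, assuming the newest-cni-895642 profile is still running and that the manifests minikube scp'd to /etc/kubernetes/addons remain on the node (kubectl path and KUBECONFIG taken from the log line above):
	
	  # Hypothetical manual replay of the dashboard addon apply shown above.
	  # Assumes profile newest-cni-895642 is up and the manifests exist on the node.
	  minikube -p newest-cni-895642 ssh -- sudo \
	    KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	    -f /etc/kubernetes/addons/dashboard-ns.yaml \
	    -f /etc/kubernetes/addons/dashboard-svc.yaml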
	
	
	==> CRI-O <==
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.050339336Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.053802784Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.053962713Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.054030242Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.057270253Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.057469378Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.05754581Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.063745468Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.063797596Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.063823598Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.068167625Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.068220278Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.222697714Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2b767052-04e2-41f3-b7c3-e9ccdfdd59fc name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.223636476Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d45ae456-48d8-4831-aa6f-78e1edd12404 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.224661056Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4/dashboard-metrics-scraper" id=70bcce43-10ce-43a8-b2e5-a1df6761bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.224922713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.232296254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.232968435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.251579496Z" level=info msg="Created container 58deb2a42f9abf760898d192ccbd4c49190875c9116b13743bcd893003255084: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4/dashboard-metrics-scraper" id=70bcce43-10ce-43a8-b2e5-a1df6761bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.252511759Z" level=info msg="Starting container: 58deb2a42f9abf760898d192ccbd4c49190875c9116b13743bcd893003255084" id=d50306c8-8545-4524-b528-a182eb25b730 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.254717934Z" level=info msg="Started container" PID=1724 containerID=58deb2a42f9abf760898d192ccbd4c49190875c9116b13743bcd893003255084 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4/dashboard-metrics-scraper id=d50306c8-8545-4524-b528-a182eb25b730 name=/runtime.v1.RuntimeService/StartContainer sandboxID=72c5aaf380e9a01c6324ba887e17b66a16c34b282a5b7dc92102e6716fee0dc4
	Oct 19 13:19:27 default-k8s-diff-port-455348 conmon[1722]: conmon 58deb2a42f9abf760898 <ninfo>: container 1724 exited with status 1
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.566427238Z" level=info msg="Removing container: dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0" id=95d81339-d871-482c-a2fc-9d2b81b60b9f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.574789281Z" level=info msg="Error loading conmon cgroup of container dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0: cgroup deleted" id=95d81339-d871-482c-a2fc-9d2b81b60b9f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.581739094Z" level=info msg="Removed container dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4/dashboard-metrics-scraper" id=95d81339-d871-482c-a2fc-9d2b81b60b9f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	58deb2a42f9ab       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago       Exited              dashboard-metrics-scraper   3                   72c5aaf380e9a       dashboard-metrics-scraper-6ffb444bf9-7hdg4             kubernetes-dashboard
	aa8fdf86ae37d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           29 seconds ago       Running             storage-provisioner         2                   d96e26e65116c       storage-provisioner                                    kube-system
	f1059e6092955       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   56f579d37fd06       kubernetes-dashboard-855c9754f9-tvbrn                  kubernetes-dashboard
	f6bc238b7f538       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   ecc53ca1e0855       coredns-66bc5c9577-qn68x                               kube-system
	64b3263c4cb93       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   d96e26e65116c       storage-provisioner                                    kube-system
	7b9203ac4a1b0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   134124f5341ee       kindnet-m2tx2                                          kube-system
	77fee27408687       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   08b451a92b435       kube-proxy-vbd99                                       kube-system
	176c3a4be4ff9       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   15ba7aa4ec67e       busybox                                                default
	d68e31f9ddc62       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   0163cc5d4d740       kube-controller-manager-default-k8s-diff-port-455348   kube-system
	9dc424071c1b9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   aa08c9cca997d       kube-scheduler-default-k8s-diff-port-455348            kube-system
	b34e96695557c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   3f4b44fa75940       etcd-default-k8s-diff-port-455348                      kube-system
	e5b09162fcaf4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   1bd3e9eb59281       kube-apiserver-default-k8s-diff-port-455348            kube-system
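	The table above is CRI container status collected from the node; a minimal sketch for re-querying it, assuming CRI-O's default socket path:
	
	  # Re-list all containers (running and exited) on the node via crictl.
	  minikube -p default-k8s-diff-port-455348 ssh -- \
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a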
	
	
	==> coredns [f6bc238b7f538a7f20fc0f48f49813daa4ac28c616e85783e9483bfc32f490fc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58764 - 64695 "HINFO IN 280051824672132967.7513914780674631863. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016045445s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
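	The repeated "dial tcp 10.96.0.1:443: i/o timeout" errors above indicate coredns could not reach the in-cluster apiserver VIP for a window after the restart. A minimal reachability probe for that VIP, assuming the curlimages/curl image is pullable in this environment:
	
	  # One-shot pod that probes the kubernetes service VIP coredns timed out on.
	  kubectl run apicheck --rm -it --restart=Never \
	    --image=curlimages/curl:8.8.0 -- \
	    curl -sk --max-time 5 https://10.96.0.1/version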
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-455348
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-455348
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=default-k8s-diff-port-455348
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_17_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:17:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-455348
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:19:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:19:07 +0000   Sun, 19 Oct 2025 13:17:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:19:07 +0000   Sun, 19 Oct 2025 13:17:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:19:07 +0000   Sun, 19 Oct 2025 13:17:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:19:07 +0000   Sun, 19 Oct 2025 13:17:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-455348
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                274325ea-a55a-4ae3-bfda-c03acb1cf740
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-qn68x                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m25s
	  kube-system                 etcd-default-k8s-diff-port-455348                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-m2tx2                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-455348             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-455348    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-vbd99                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-455348             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-7hdg4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tvbrn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   Starting                 59s                    kube-proxy       
	  Normal   Starting                 2m38s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m26s                  node-controller  Node default-k8s-diff-port-455348 event: Registered Node default-k8s-diff-port-455348 in Controller
	  Normal   NodeReady                104s                   kubelet          Node default-k8s-diff-port-455348 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node default-k8s-diff-port-455348 event: Registered Node default-k8s-diff-port-455348 in Controller
	
	
	==> dmesg <==
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	[Oct19 13:15] overlayfs: idmapped layers are currently not supported
	[ +34.413925] overlayfs: idmapped layers are currently not supported
	[Oct19 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.716246] overlayfs: idmapped layers are currently not supported
	[Oct19 13:18] overlayfs: idmapped layers are currently not supported
	[Oct19 13:19] overlayfs: idmapped layers are currently not supported
	[ +25.562956] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b34e96695557c6959cce715a57b32eef60a662626ab95fd5b08a3505f2cfe53a] <==
	{"level":"warn","ts":"2025-10-19T13:18:33.592734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.629777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.678751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.734387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.777223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.803095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.837372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.862343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.876868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.909220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.934760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.978546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.996282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.044635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.070409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.106511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.128290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.152555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.163942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.186881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.232022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.272365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.289787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.386169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.517492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36442","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:19:39 up  3:02,  0 user,  load average: 5.11, 3.92, 3.09
	Linux default-k8s-diff-port-455348 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7b9203ac4a1b0f71c0dd63a1f8c349a569a3ce4f03d54c74eaa8ea2b7fa8839e] <==
	I1019 13:18:38.906244       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:18:38.917417       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 13:18:38.917633       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:18:38.917713       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:18:38.917755       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:18:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:18:39.041884       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:18:39.105769       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:18:39.105879       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:18:39.106412       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 13:19:09.042128       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 13:19:09.106651       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 13:19:09.106780       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 13:19:09.107815       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 13:19:10.706942       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:19:10.707058       1 metrics.go:72] Registering metrics
	I1019 13:19:10.707148       1 controller.go:711] "Syncing nftables rules"
	I1019 13:19:19.046009       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:19:19.046128       1 main.go:301] handling current node
	I1019 13:19:29.042663       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:19:29.042724       1 main.go:301] handling current node
	I1019 13:19:39.045972       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:19:39.046022       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e5b09162fcaf4578399f5a03831d7d61cf4bfd1901478ea7fed991f19b9f174e] <==
	I1019 13:18:36.498447       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 13:18:36.498528       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 13:18:36.506285       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 13:18:36.506375       1 policy_source.go:240] refreshing policies
	I1019 13:18:36.518249       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:18:36.519098       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 13:18:36.519124       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 13:18:36.519242       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 13:18:36.519397       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 13:18:36.520856       1 aggregator.go:171] initial CRD sync complete...
	I1019 13:18:36.520882       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 13:18:36.520890       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 13:18:36.520896       1 cache.go:39] Caches are synced for autoregister controller
	I1019 13:18:36.576289       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:18:36.921885       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1019 13:18:36.938254       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 13:18:37.983988       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 13:18:38.118159       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 13:18:38.188137       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:18:38.220141       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:18:38.478602       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.158.248"}
	I1019 13:18:38.536319       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.184.193"}
	I1019 13:18:41.732307       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 13:18:42.154025       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 13:18:42.235229       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d68e31f9ddc629258adae34a5c4914451d4039479223db3fc89b9ec518005fc0] <==
	I1019 13:18:41.699175       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 13:18:41.699253       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:18:41.699708       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 13:18:41.699753       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 13:18:41.701292       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 13:18:41.705254       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 13:18:41.706741       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:18:41.707918       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 13:18:41.709531       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 13:18:41.712776       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 13:18:41.718284       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 13:18:41.724766       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 13:18:41.724815       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 13:18:41.724880       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 13:18:41.724941       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:18:41.724953       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 13:18:41.724960       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 13:18:41.725554       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 13:18:41.726920       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 13:18:41.726971       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 13:18:41.730739       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:18:41.741787       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 13:18:41.746586       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:18:42.258241       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1019 13:18:42.261132       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [77fee27408687abc67ef099c98ed62f58cae326fcb4d0fe2e71f7876a1fa488a] <==
	I1019 13:18:38.659277       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:18:39.176671       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:18:39.310052       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:18:39.310098       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 13:18:39.310181       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:18:39.836267       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:18:39.836352       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:18:39.950951       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:18:39.951443       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:18:39.951647       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:18:39.952992       1 config.go:200] "Starting service config controller"
	I1019 13:18:39.953056       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:18:39.953121       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:18:39.953157       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:18:39.953199       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:18:39.953234       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:18:39.954092       1 config.go:309] "Starting node config controller"
	I1019 13:18:39.954168       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:18:39.954213       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:18:40.055489       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:18:40.055529       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 13:18:40.055607       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9dc424071c1b92771542bfccd38e435461e8182ac00adb300909438d1cbf9b8f] <==
	I1019 13:18:32.451975       1 serving.go:386] Generated self-signed cert in-memory
	W1019 13:18:36.178408       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 13:18:36.178506       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 13:18:36.178539       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 13:18:36.178580       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 13:18:36.535592       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 13:18:36.535715       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:18:36.585541       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:18:36.585707       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:18:36.586985       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:18:36.585722       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 13:18:36.689833       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 13:18:49 default-k8s-diff-port-455348 kubelet[775]: I1019 13:18:49.425363     775 scope.go:117] "RemoveContainer" containerID="6596480724fa779cb64e08b7b57aa1119aac5e154babd7c9d27b8b992ad0af96"
	Oct 19 13:18:50 default-k8s-diff-port-455348 kubelet[775]: I1019 13:18:50.430711     775 scope.go:117] "RemoveContainer" containerID="6596480724fa779cb64e08b7b57aa1119aac5e154babd7c9d27b8b992ad0af96"
	Oct 19 13:18:50 default-k8s-diff-port-455348 kubelet[775]: I1019 13:18:50.430992     775 scope.go:117] "RemoveContainer" containerID="7fb9a36843d5b36479284481530b48398b5d745954401f1590af0523b3ae48be"
	Oct 19 13:18:50 default-k8s-diff-port-455348 kubelet[775]: E1019 13:18:50.431148     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:18:51 default-k8s-diff-port-455348 kubelet[775]: I1019 13:18:51.435082     775 scope.go:117] "RemoveContainer" containerID="7fb9a36843d5b36479284481530b48398b5d745954401f1590af0523b3ae48be"
	Oct 19 13:18:51 default-k8s-diff-port-455348 kubelet[775]: E1019 13:18:51.435235     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:18:52 default-k8s-diff-port-455348 kubelet[775]: I1019 13:18:52.438875     775 scope.go:117] "RemoveContainer" containerID="7fb9a36843d5b36479284481530b48398b5d745954401f1590af0523b3ae48be"
	Oct 19 13:18:52 default-k8s-diff-port-455348 kubelet[775]: E1019 13:18:52.439045     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:19:06 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:06.220510     775 scope.go:117] "RemoveContainer" containerID="7fb9a36843d5b36479284481530b48398b5d745954401f1590af0523b3ae48be"
	Oct 19 13:19:06 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:06.488728     775 scope.go:117] "RemoveContainer" containerID="7fb9a36843d5b36479284481530b48398b5d745954401f1590af0523b3ae48be"
	Oct 19 13:19:06 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:06.490176     775 scope.go:117] "RemoveContainer" containerID="dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0"
	Oct 19 13:19:06 default-k8s-diff-port-455348 kubelet[775]: E1019 13:19:06.490479     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:19:06 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:06.516717     775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tvbrn" podStartSLOduration=11.210575253 podStartE2EDuration="24.516702617s" podCreationTimestamp="2025-10-19 13:18:42 +0000 UTC" firstStartedPulling="2025-10-19 13:18:42.765124772 +0000 UTC m=+13.846503273" lastFinishedPulling="2025-10-19 13:18:56.071252128 +0000 UTC m=+27.152630637" observedRunningTime="2025-10-19 13:18:56.477821713 +0000 UTC m=+27.559200230" watchObservedRunningTime="2025-10-19 13:19:06.516702617 +0000 UTC m=+37.598081118"
	Oct 19 13:19:09 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:09.506022     775 scope.go:117] "RemoveContainer" containerID="64b3263c4cb9377c973c0405da32ab9f8ae72ae6589d72bc7ad0b1fc5dc41c04"
	Oct 19 13:19:12 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:12.393225     775 scope.go:117] "RemoveContainer" containerID="dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0"
	Oct 19 13:19:12 default-k8s-diff-port-455348 kubelet[775]: E1019 13:19:12.393411     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:19:27 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:27.222299     775 scope.go:117] "RemoveContainer" containerID="dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0"
	Oct 19 13:19:27 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:27.562946     775 scope.go:117] "RemoveContainer" containerID="dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0"
	Oct 19 13:19:27 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:27.563233     775 scope.go:117] "RemoveContainer" containerID="58deb2a42f9abf760898d192ccbd4c49190875c9116b13743bcd893003255084"
	Oct 19 13:19:27 default-k8s-diff-port-455348 kubelet[775]: E1019 13:19:27.563391     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:19:32 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:32.393384     775 scope.go:117] "RemoveContainer" containerID="58deb2a42f9abf760898d192ccbd4c49190875c9116b13743bcd893003255084"
	Oct 19 13:19:32 default-k8s-diff-port-455348 kubelet[775]: E1019 13:19:32.394530     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:19:35 default-k8s-diff-port-455348 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 13:19:36 default-k8s-diff-port-455348 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 13:19:36 default-k8s-diff-port-455348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
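	The kubelet entries above show dashboard-metrics-scraper-6ffb444bf9-7hdg4 in CrashLoopBackOff with escalating back-off (10s, 20s, 40s). A minimal sketch for pulling the crashed container's output, using the pod name from the log:
	
	  # Fetch logs from the previously exited container of the crash-looping pod.
	  kubectl -n kubernetes-dashboard logs \
	    dashboard-metrics-scraper-6ffb444bf9-7hdg4 --previous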
	
	
	==> kubernetes-dashboard [f1059e6092955af4f3316486a54cacbf36083e9dda490f278b0fb3ef045f8eb2] <==
	2025/10/19 13:18:56 Using namespace: kubernetes-dashboard
	2025/10/19 13:18:56 Using in-cluster config to connect to apiserver
	2025/10/19 13:18:56 Using secret token for csrf signing
	2025/10/19 13:18:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 13:18:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 13:18:56 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 13:18:56 Generating JWE encryption key
	2025/10/19 13:18:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 13:18:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 13:18:56 Initializing JWE encryption key from synchronized object
	2025/10/19 13:18:56 Creating in-cluster Sidecar client
	2025/10/19 13:18:56 Serving insecurely on HTTP port: 9090
	2025/10/19 13:18:56 Starting overwatch
	2025/10/19 13:18:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:19:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [64b3263c4cb9377c973c0405da32ab9f8ae72ae6589d72bc7ad0b1fc5dc41c04] <==
	I1019 13:18:39.021409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 13:19:09.023017       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
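	The first storage-provisioner instance dies because it cannot reach 10.96.0.1:443, the ClusterIP of the default kubernetes Service (the first address of the cluster's 10.96.0.0/12 ServiceCIDR, visible in the start log below). Every in-cluster client reaches the API server through that VIP, so a dial timeout there means the service-proxy path was down while the cluster was paused, and the provisioner aborts after its 32s request budget. A quick probe of that path, as a sketch:
	
		package main
	
		import (
			"fmt"
			"net"
			"time"
		)
	
		func main() {
			// 10.96.0.1:443 is the in-cluster API VIP the provisioner dials above.
			conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
			if err != nil {
				fmt.Println("dial failed:", err) // the same i/o timeout that kills main.go:39 above
				return
			}
			conn.Close()
			fmt.Println("API VIP reachable")
		}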
	
	
	==> storage-provisioner [aa8fdf86ae37de45052d5f9afe9fd03316efa20075210f8c3437382ef6fb7292] <==
	W1019 13:19:09.624219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:13.079480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:17.340009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:20.938420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:23.992059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:27.015903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:27.021602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:19:27.021810       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 13:19:27.021988       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-455348_bddb783d-ca81-4722-950a-c0956362b63b!
	I1019 13:19:27.022659       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b99ed1ce-9305-43d9-afc4-d6b8159429cd", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-455348_bddb783d-ca81-4722-950a-c0956362b63b became leader
	W1019 13:19:27.027074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:27.032707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:19:27.122221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-455348_bddb783d-ca81-4722-950a-c0956362b63b!
	W1019 13:19:29.035971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:29.045472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:31.049501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:31.059462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:33.068604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:33.077149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:35.085997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:35.098474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:37.102426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:37.107476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:39.115712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:39.150257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
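	The repeated warnings in the second instance are client-go relaying the API server's deprecation notice: the provisioner's leader-election lock is still the v1 Endpoints object k8s.io-minikube-hostpath (the LeaderElection event above), so every read of it triggers the warning. The server points at discovery.k8s.io/v1 EndpointSlice for endpoint data; a hedged sketch of that replacement call, with a kubeconfig-backed client and for illustration only:
	
		package main
	
		import (
			"context"
			"fmt"
			"log"
	
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
	
		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				log.Fatal(err)
			}
			cs := kubernetes.NewForConfigOrDie(cfg)
			slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").
				List(context.Background(), metav1.ListOptions{})
			if err != nil {
				log.Fatal(err)
			}
			for _, s := range slices.Items {
				fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
			}
		}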
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-455348 -n default-k8s-diff-port-455348
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-455348 -n default-k8s-diff-port-455348: exit status 2 (517.859647ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-455348 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
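Both status probes in this post-mortem render a Go text/template against the profile's status (--format={{.APIServer}} and --format={{.Host}}), and both print Running while exiting 2; the helper itself notes that "may be ok", since the non-zero exit reflects components that are not all healthy even though the queried field prints Running. A small sketch of the same templating mechanism, with a hypothetical Status struct and values standing in for minikube's own:

	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		// The --format flag value becomes the template body.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}); err != nil {
			panic(err)
		}
	}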
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-455348
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-455348:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7",
	        "Created": "2025-10-19T13:16:44.03379204Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496701,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:18:21.840756805Z",
	            "FinishedAt": "2025-10-19T13:18:21.001094961Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/hosts",
	        "LogPath": "/var/lib/docker/containers/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7/6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7-json.log",
	        "Name": "/default-k8s-diff-port-455348",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-455348:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-455348",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6519411d3b62538e5e195c08e3014b82901f70ad152792b0c7171626de8e55e7",
	                "LowerDir": "/var/lib/docker/overlay2/69c3312626a00a0a29de39da0ee3edd7eb25e0b33a22ef9214343606d7a497c2-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69c3312626a00a0a29de39da0ee3edd7eb25e0b33a22ef9214343606d7a497c2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69c3312626a00a0a29de39da0ee3edd7eb25e0b33a22ef9214343606d7a497c2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69c3312626a00a0a29de39da0ee3edd7eb25e0b33a22ef9214343606d7a497c2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-455348",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-455348/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-455348",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-455348",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-455348",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "206c6caa27503d1e4d7946d22471664704abd1474ab988c95d0c9f6ae9bd541d",
	            "SandboxKey": "/var/run/docker/netns/206c6caa2750",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-455348": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:f8:1c:40:18:3a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "feb5b6cb71ad4f1814069d9c1fecfa12355d747dd07980e633df65a307f6c04b",
	                    "EndpointID": "dc48b2e24d13c19873e3da1ce5a751a52b1bc85db1269209adc65cc8d0a34b3b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-455348",
	                        "6519411d3b62"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
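The inspect dump is the key evidence for this Pause failure: State shows "Running": true and "Paused": false, so the pause command never left the container in the paused state the test expects. A hedged Go sketch that pulls just those fields out of docker inspect JSON (container name taken from the log above; this helper is not part of the test suite):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type state struct {
		Status  string
		Running bool
		Paused  bool
	}

	func main() {
		// docker inspect emits a JSON array of container objects.
		out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-455348").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []struct{ State state }
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		for _, c := range containers {
			fmt.Printf("status=%s running=%v paused=%v\n", c.State.Status, c.State.Running, c.State.Paused)
		}
	}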
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-455348 -n default-k8s-diff-port-455348
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-455348 -n default-k8s-diff-port-455348: exit status 2 (514.577001ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-455348 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-455348 logs -n 25: (1.951488389s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-108149 image list --format=json                                                                                                                                                                                                    │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ pause   │ -p no-preload-108149 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │                     │
	│ delete  │ -p no-preload-108149                                                                                                                                                                                                                          │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p no-preload-108149                                                                                                                                                                                                                          │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p disable-driver-mounts-418719                                                                                                                                                                                                               │ disable-driver-mounts-418719 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │                     │
	│ stop    │ -p embed-certs-834340 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-834340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-455348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-455348 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-455348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:19 UTC │
	│ image   │ embed-certs-834340 image list --format=json                                                                                                                                                                                                   │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ pause   │ -p embed-certs-834340 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ delete  │ -p embed-certs-834340                                                                                                                                                                                                                         │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ delete  │ -p embed-certs-834340                                                                                                                                                                                                                         │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ start   │ -p newest-cni-895642 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-895642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	│ stop    │ -p newest-cni-895642 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-895642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ start   │ -p newest-cni-895642 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	│ image   │ default-k8s-diff-port-455348 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ pause   │ -p default-k8s-diff-port-455348 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:19:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:19:25.169345  503186 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:19:25.169574  503186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:19:25.169605  503186 out.go:374] Setting ErrFile to fd 2...
	I1019 13:19:25.169626  503186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:19:25.169968  503186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:19:25.170447  503186 out.go:368] Setting JSON to false
	I1019 13:19:25.171637  503186 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10916,"bootTime":1760869050,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:19:25.171742  503186 start.go:141] virtualization:  
	I1019 13:19:25.174991  503186 out.go:179] * [newest-cni-895642] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:19:25.179084  503186 notify.go:220] Checking for updates...
	I1019 13:19:25.180010  503186 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:19:25.183043  503186 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:19:25.186046  503186 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:19:25.189047  503186 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:19:25.192100  503186 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:19:25.195015  503186 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:19:25.198292  503186 config.go:182] Loaded profile config "newest-cni-895642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:19:25.198883  503186 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:19:25.226453  503186 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:19:25.226605  503186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:19:25.295086  503186 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:19:25.278659503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:19:25.295192  503186 docker.go:318] overlay module found
	I1019 13:19:25.298224  503186 out.go:179] * Using the docker driver based on existing profile
	I1019 13:19:25.301721  503186 start.go:305] selected driver: docker
	I1019 13:19:25.301740  503186 start.go:925] validating driver "docker" against &{Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:19:25.301843  503186 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:19:25.302559  503186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:19:25.357591  503186 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:19:25.348074344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:19:25.357979  503186 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 13:19:25.358017  503186 cni.go:84] Creating CNI manager for ""
	I1019 13:19:25.358086  503186 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:19:25.358136  503186 start.go:349] cluster config:
	{Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:19:25.361457  503186 out.go:179] * Starting "newest-cni-895642" primary control-plane node in "newest-cni-895642" cluster
	I1019 13:19:25.364282  503186 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:19:25.367200  503186 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:19:25.370018  503186 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:19:25.370106  503186 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:19:25.370116  503186 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 13:19:25.370138  503186 cache.go:58] Caching tarball of preloaded images
	I1019 13:19:25.370227  503186 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:19:25.370241  503186 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 13:19:25.370363  503186 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/config.json ...
	I1019 13:19:25.389423  503186 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:19:25.389445  503186 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:19:25.389464  503186 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:19:25.389487  503186 start.go:360] acquireMachinesLock for newest-cni-895642: {Name:mke5c4230882c7c86983f0da461147450e8e886d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:19:25.389556  503186 start.go:364] duration metric: took 46.253µs to acquireMachinesLock for "newest-cni-895642"
	I1019 13:19:25.389579  503186 start.go:96] Skipping create...Using existing machine configuration
	I1019 13:19:25.389586  503186 fix.go:54] fixHost starting: 
	I1019 13:19:25.389918  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:25.406454  503186 fix.go:112] recreateIfNeeded on newest-cni-895642: state=Stopped err=<nil>
	W1019 13:19:25.406489  503186 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 13:19:25.409740  503186 out.go:252] * Restarting existing docker container for "newest-cni-895642" ...
	I1019 13:19:25.409823  503186 cli_runner.go:164] Run: docker start newest-cni-895642
	I1019 13:19:25.672026  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:25.703460  503186 kic.go:430] container "newest-cni-895642" state is running.
	I1019 13:19:25.704103  503186 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895642
	I1019 13:19:25.727547  503186 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/config.json ...
	I1019 13:19:25.727779  503186 machine.go:93] provisionDockerMachine start ...
	I1019 13:19:25.727860  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:25.756624  503186 main.go:141] libmachine: Using SSH client type: native
	I1019 13:19:25.757520  503186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1019 13:19:25.757541  503186 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:19:25.758657  503186 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1019 13:19:28.913337  503186 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-895642
	
	I1019 13:19:28.913369  503186 ubuntu.go:182] provisioning hostname "newest-cni-895642"
	I1019 13:19:28.913434  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:28.933553  503186 main.go:141] libmachine: Using SSH client type: native
	I1019 13:19:28.934046  503186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1019 13:19:28.934066  503186 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-895642 && echo "newest-cni-895642" | sudo tee /etc/hostname
	I1019 13:19:29.099311  503186 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-895642
	
	I1019 13:19:29.099432  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:29.119806  503186 main.go:141] libmachine: Using SSH client type: native
	I1019 13:19:29.120136  503186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1019 13:19:29.120158  503186 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895642' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895642/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895642' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:19:29.277884  503186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 13:19:29.277914  503186 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:19:29.277946  503186 ubuntu.go:190] setting up certificates
	I1019 13:19:29.277961  503186 provision.go:84] configureAuth start
	I1019 13:19:29.278034  503186 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895642
	I1019 13:19:29.301808  503186 provision.go:143] copyHostCerts
	I1019 13:19:29.301873  503186 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:19:29.301892  503186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:19:29.301967  503186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:19:29.302085  503186 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:19:29.302090  503186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:19:29.302117  503186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:19:29.302199  503186 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:19:29.302205  503186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:19:29.302233  503186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:19:29.302290  503186 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.newest-cni-895642 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-895642]
	I1019 13:19:29.374167  503186 provision.go:177] copyRemoteCerts
	I1019 13:19:29.374259  503186 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:19:29.374318  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:29.391140  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:29.493830  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:19:29.514390  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 13:19:29.532916  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 13:19:29.550713  503186 provision.go:87] duration metric: took 272.732509ms to configureAuth
	I1019 13:19:29.550741  503186 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:19:29.550946  503186 config.go:182] Loaded profile config "newest-cni-895642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:19:29.551070  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:29.569921  503186 main.go:141] libmachine: Using SSH client type: native
	I1019 13:19:29.570253  503186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1019 13:19:29.570274  503186 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:19:29.873306  503186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:19:29.873332  503186 machine.go:96] duration metric: took 4.145535815s to provisionDockerMachine
	I1019 13:19:29.873352  503186 start.go:293] postStartSetup for "newest-cni-895642" (driver="docker")
	I1019 13:19:29.873364  503186 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:19:29.873444  503186 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:19:29.873490  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:29.890148  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:29.997258  503186 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:19:30.002593  503186 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:19:30.002644  503186 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:19:30.002659  503186 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:19:30.002738  503186 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:19:30.002829  503186 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:19:30.002936  503186 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:19:30.029850  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:19:30.066991  503186 start.go:296] duration metric: took 193.620206ms for postStartSetup
	I1019 13:19:30.067112  503186 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:19:30.067248  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:30.088223  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:30.191529  503186 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:19:30.196369  503186 fix.go:56] duration metric: took 4.806775977s for fixHost
	I1019 13:19:30.196395  503186 start.go:83] releasing machines lock for "newest-cni-895642", held for 4.806827736s
	I1019 13:19:30.196471  503186 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895642
	I1019 13:19:30.214998  503186 ssh_runner.go:195] Run: cat /version.json
	I1019 13:19:30.215056  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:30.215139  503186 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:19:30.215199  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:30.240016  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:30.241564  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:30.346049  503186 ssh_runner.go:195] Run: systemctl --version
	I1019 13:19:30.440967  503186 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:19:30.476294  503186 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:19:30.480767  503186 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:19:30.480880  503186 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:19:30.488567  503186 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 13:19:30.488602  503186 start.go:495] detecting cgroup driver to use...
	I1019 13:19:30.488634  503186 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:19:30.488699  503186 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:19:30.504768  503186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:19:30.517613  503186 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:19:30.517744  503186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:19:30.534697  503186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:19:30.547999  503186 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:19:30.666826  503186 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:19:30.789564  503186 docker.go:234] disabling docker service ...
	I1019 13:19:30.789718  503186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:19:30.805667  503186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:19:30.827277  503186 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:19:30.950983  503186 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:19:31.080274  503186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:19:31.095662  503186 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:19:31.111621  503186 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 13:19:31.111694  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.122130  503186 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:19:31.122227  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.132706  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.142968  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.152846  503186 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:19:31.161851  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.171479  503186 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.180553  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.190292  503186 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:19:31.198459  503186 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:19:31.205996  503186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:19:31.330350  503186 ssh_runner.go:195] Run: sudo systemctl restart crio
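The steps from 13:19:31.095 through 13:19:31.330 point crictl at CRI-O's socket and rewrite /etc/crio/crio.conf.d/02-crio.conf before restarting the runtime. Consolidated as one sketch (paths, pause image, and sed expressions taken from the log lines above; the script framing is illustrative):
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# Let pods bind low ports without CAP_NET_BIND_SERVICE.
	sudo grep -q '^ *default_sysctls' "$CONF" || \
	  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio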
	I1019 13:19:31.465643  503186 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:19:31.465758  503186 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:19:31.469621  503186 start.go:563] Will wait 60s for crictl version
	I1019 13:19:31.469847  503186 ssh_runner.go:195] Run: which crictl
	I1019 13:19:31.473844  503186 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:19:31.498952  503186 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 13:19:31.499106  503186 ssh_runner.go:195] Run: crio --version
	I1019 13:19:31.528942  503186 ssh_runner.go:195] Run: crio --version
	I1019 13:19:31.561590  503186 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 13:19:31.564368  503186 cli_runner.go:164] Run: docker network inspect newest-cni-895642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:19:31.581115  503186 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 13:19:31.584948  503186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
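The hosts-file update above writes to a temp file and then copies it with sudo because output redirection runs in the unprivileged calling shell, not under sudo. The pattern, extracted (IP and hostname from the log line):
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.85.1\thost.minikube.internal\n'
	} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts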
	I1019 13:19:31.597969  503186 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 13:19:31.600776  503186 kubeadm.go:883] updating cluster {Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 13:19:31.600918  503186 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:19:31.601013  503186 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:19:31.635369  503186 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:19:31.635393  503186 crio.go:433] Images already preloaded, skipping extraction
	I1019 13:19:31.635446  503186 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:19:31.662206  503186 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:19:31.662228  503186 cache_images.go:85] Images are preloaded, skipping loading
	I1019 13:19:31.662251  503186 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 13:19:31.662400  503186 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-895642 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 13:19:31.662503  503186 ssh_runner.go:195] Run: crio config
	I1019 13:19:31.740719  503186 cni.go:84] Creating CNI manager for ""
	I1019 13:19:31.740744  503186 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:19:31.740791  503186 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 13:19:31.740824  503186 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895642 NodeName:newest-cni-895642 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:19:31.740960  503186 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-895642"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 13:19:31.741033  503186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 13:19:31.749166  503186 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 13:19:31.749265  503186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 13:19:31.757972  503186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 13:19:31.772492  503186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:19:31.785933  503186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
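The rendered kubeadm manifest is staged as /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can sanity-check such a file before it is used; assuming the "config validate" subcommand is present in the v1.34.1 binary staged above, a check would look like:
	/var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new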
	I1019 13:19:31.799868  503186 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 13:19:31.803838  503186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:19:31.813960  503186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:19:31.927165  503186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:19:31.943424  503186 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642 for IP: 192.168.85.2
	I1019 13:19:31.943448  503186 certs.go:195] generating shared ca certs ...
	I1019 13:19:31.943464  503186 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:31.943596  503186 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 13:19:31.943651  503186 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 13:19:31.943663  503186 certs.go:257] generating profile certs ...
	I1019 13:19:31.943751  503186 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/client.key
	I1019 13:19:31.943815  503186 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.key.d4125fb8
	I1019 13:19:31.943857  503186 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.key
	I1019 13:19:31.943986  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem (1338 bytes)
	W1019 13:19:31.944020  503186 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518_empty.pem, impossibly tiny 0 bytes
	I1019 13:19:31.944033  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 13:19:31.944067  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 13:19:31.944096  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 13:19:31.944123  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 13:19:31.944168  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:19:31.944773  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 13:19:31.968205  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 13:19:31.988543  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 13:19:32.011799  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 13:19:32.034572  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 13:19:32.058346  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 13:19:32.081243  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 13:19:32.107781  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 13:19:32.139561  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /usr/share/ca-certificates/2945182.pem (1708 bytes)
	I1019 13:19:32.167240  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 13:19:32.190650  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem --> /usr/share/ca-certificates/294518.pem (1338 bytes)
	I1019 13:19:32.210444  503186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 13:19:32.225354  503186 ssh_runner.go:195] Run: openssl version
	I1019 13:19:32.231708  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 13:19:32.240811  503186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:19:32.244713  503186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:19:32.244786  503186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:19:32.294059  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 13:19:32.302549  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294518.pem && ln -fs /usr/share/ca-certificates/294518.pem /etc/ssl/certs/294518.pem"
	I1019 13:19:32.312139  503186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294518.pem
	I1019 13:19:32.315693  503186 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:20 /usr/share/ca-certificates/294518.pem
	I1019 13:19:32.315781  503186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294518.pem
	I1019 13:19:32.356958  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294518.pem /etc/ssl/certs/51391683.0"
	I1019 13:19:32.365153  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2945182.pem && ln -fs /usr/share/ca-certificates/2945182.pem /etc/ssl/certs/2945182.pem"
	I1019 13:19:32.373578  503186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2945182.pem
	I1019 13:19:32.377296  503186 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:20 /usr/share/ca-certificates/2945182.pem
	I1019 13:19:32.377391  503186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2945182.pem
	I1019 13:19:32.421016  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2945182.pem /etc/ssl/certs/3ec20f2e.0"
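Each openssl x509 -hash call above computes the subject-hash filename that OpenSSL's certificate lookup expects (b5213941.0, 51391683.0, 3ec20f2e.0 in the symlink commands). The link for one cert, spelled out as a sketch:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is "b5213941" here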
	I1019 13:19:32.429783  503186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 13:19:32.433636  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 13:19:32.475552  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 13:19:32.516702  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 13:19:32.557815  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 13:19:32.600480  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 13:19:32.650589  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
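The -checkend 86400 probes above exit nonzero for any certificate expiring within the next 24 hours. The same check as a loop (cert names taken from the log; the loop itself is illustrative):
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
	    || echo "$c.crt expires within 24h"
	done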
	I1019 13:19:32.714948  503186 kubeadm.go:400] StartCluster: {Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:19:32.715039  503186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 13:19:32.715138  503186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 13:19:32.785785  503186 cri.go:89] found id: "df7751d1304bdecb2f8c2da9564eb9648edb59cf776486a8eab0e66763b2a99a"
	I1019 13:19:32.785808  503186 cri.go:89] found id: ""
	I1019 13:19:32.785895  503186 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 13:19:32.812260  503186 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:19:32Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:19:32.812389  503186 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 13:19:32.834629  503186 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 13:19:32.834659  503186 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 13:19:32.834752  503186 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 13:19:32.860259  503186 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 13:19:32.860861  503186 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-895642" does not appear in /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:19:32.861329  503186 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-292654/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-895642" cluster setting kubeconfig missing "newest-cni-895642" context setting]
	I1019 13:19:32.862441  503186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:32.866956  503186 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 13:19:32.894204  503186 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 13:19:32.894240  503186 kubeadm.go:601] duration metric: took 59.575017ms to restartPrimaryControlPlane
	I1019 13:19:32.894250  503186 kubeadm.go:402] duration metric: took 179.312154ms to StartCluster
	I1019 13:19:32.894265  503186 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:32.894332  503186 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:19:32.895329  503186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:32.895543  503186 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:19:32.895955  503186 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 13:19:32.896047  503186 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-895642"
	I1019 13:19:32.896069  503186 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-895642"
	W1019 13:19:32.896075  503186 addons.go:247] addon storage-provisioner should already be in state true
	I1019 13:19:32.896106  503186 host.go:66] Checking if "newest-cni-895642" exists ...
	I1019 13:19:32.896714  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:32.896881  503186 config.go:182] Loaded profile config "newest-cni-895642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:19:32.896933  503186 addons.go:69] Setting dashboard=true in profile "newest-cni-895642"
	I1019 13:19:32.896944  503186 addons.go:238] Setting addon dashboard=true in "newest-cni-895642"
	W1019 13:19:32.896950  503186 addons.go:247] addon dashboard should already be in state true
	I1019 13:19:32.896967  503186 host.go:66] Checking if "newest-cni-895642" exists ...
	I1019 13:19:32.897398  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:32.898253  503186 addons.go:69] Setting default-storageclass=true in profile "newest-cni-895642"
	I1019 13:19:32.898321  503186 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-895642"
	I1019 13:19:32.898658  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:32.899976  503186 out.go:179] * Verifying Kubernetes components...
	I1019 13:19:32.903268  503186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:19:32.955871  503186 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 13:19:32.958895  503186 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:19:32.958923  503186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 13:19:32.958989  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:32.959467  503186 addons.go:238] Setting addon default-storageclass=true in "newest-cni-895642"
	W1019 13:19:32.959482  503186 addons.go:247] addon default-storageclass should already be in state true
	I1019 13:19:32.959506  503186 host.go:66] Checking if "newest-cni-895642" exists ...
	I1019 13:19:32.959935  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:32.977658  503186 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 13:19:32.982573  503186 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 13:19:32.985533  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 13:19:32.985564  503186 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 13:19:32.985636  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:33.020700  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:33.045991  503186 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 13:19:33.046014  503186 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 13:19:33.046082  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:33.055558  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:33.097899  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:33.300589  503186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:19:33.323301  503186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:19:33.338409  503186 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:19:33.338537  503186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:19:33.373599  503186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 13:19:33.399805  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 13:19:33.399871  503186 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 13:19:33.446692  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 13:19:33.446720  503186 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 13:19:33.535860  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 13:19:33.535897  503186 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 13:19:33.598982  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 13:19:33.599016  503186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 13:19:33.632171  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 13:19:33.632199  503186 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 13:19:33.651270  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 13:19:33.651296  503186 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 13:19:33.664849  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 13:19:33.664876  503186 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 13:19:33.678940  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 13:19:33.678961  503186 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 13:19:33.692305  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 13:19:33.692329  503186 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 13:19:33.711585  503186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
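The ten -f flags above could equally be a single directory apply, since kubectl accepts a directory and apply is idempotent, so re-applying the already-installed manifests alongside the dashboard ones is harmless (illustrative; not what minikube runs):
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/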
	
	
	==> CRI-O <==
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.050339336Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.053802784Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.053962713Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.054030242Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.057270253Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.057469378Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.05754581Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.063745468Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.063797596Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.063823598Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.068167625Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 13:19:19 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:19.068220278Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.222697714Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2b767052-04e2-41f3-b7c3-e9ccdfdd59fc name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.223636476Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d45ae456-48d8-4831-aa6f-78e1edd12404 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.224661056Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4/dashboard-metrics-scraper" id=70bcce43-10ce-43a8-b2e5-a1df6761bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.224922713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.232296254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.232968435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.251579496Z" level=info msg="Created container 58deb2a42f9abf760898d192ccbd4c49190875c9116b13743bcd893003255084: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4/dashboard-metrics-scraper" id=70bcce43-10ce-43a8-b2e5-a1df6761bda5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.252511759Z" level=info msg="Starting container: 58deb2a42f9abf760898d192ccbd4c49190875c9116b13743bcd893003255084" id=d50306c8-8545-4524-b528-a182eb25b730 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.254717934Z" level=info msg="Started container" PID=1724 containerID=58deb2a42f9abf760898d192ccbd4c49190875c9116b13743bcd893003255084 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4/dashboard-metrics-scraper id=d50306c8-8545-4524-b528-a182eb25b730 name=/runtime.v1.RuntimeService/StartContainer sandboxID=72c5aaf380e9a01c6324ba887e17b66a16c34b282a5b7dc92102e6716fee0dc4
	Oct 19 13:19:27 default-k8s-diff-port-455348 conmon[1722]: conmon 58deb2a42f9abf760898 <ninfo>: container 1724 exited with status 1
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.566427238Z" level=info msg="Removing container: dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0" id=95d81339-d871-482c-a2fc-9d2b81b60b9f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.574789281Z" level=info msg="Error loading conmon cgroup of container dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0: cgroup deleted" id=95d81339-d871-482c-a2fc-9d2b81b60b9f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 13:19:27 default-k8s-diff-port-455348 crio[647]: time="2025-10-19T13:19:27.581739094Z" level=info msg="Removed container dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4/dashboard-metrics-scraper" id=95d81339-d871-482c-a2fc-9d2b81b60b9f name=/runtime.v1.RuntimeService/RemoveContainer
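dashboard-metrics-scraper exits with status 1 and its previous attempt is pruned on each restart. To capture the failing output before the container is removed (standard crictl usage, assumed available on the node):
	sudo crictl ps -a --name dashboard-metrics-scraper
	sudo crictl logs 58deb2a42f9ab   # container ID from the CRI-O lines above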
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	58deb2a42f9ab       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago       Exited              dashboard-metrics-scraper   3                   72c5aaf380e9a       dashboard-metrics-scraper-6ffb444bf9-7hdg4             kubernetes-dashboard
	aa8fdf86ae37d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           32 seconds ago       Running             storage-provisioner         2                   d96e26e65116c       storage-provisioner                                    kube-system
	f1059e6092955       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   56f579d37fd06       kubernetes-dashboard-855c9754f9-tvbrn                  kubernetes-dashboard
	f6bc238b7f538       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   ecc53ca1e0855       coredns-66bc5c9577-qn68x                               kube-system
	64b3263c4cb93       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   d96e26e65116c       storage-provisioner                                    kube-system
	7b9203ac4a1b0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   134124f5341ee       kindnet-m2tx2                                          kube-system
	77fee27408687       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   08b451a92b435       kube-proxy-vbd99                                       kube-system
	176c3a4be4ff9       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   15ba7aa4ec67e       busybox                                                default
	d68e31f9ddc62       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   0163cc5d4d740       kube-controller-manager-default-k8s-diff-port-455348   kube-system
	9dc424071c1b9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   aa08c9cca997d       kube-scheduler-default-k8s-diff-port-455348            kube-system
	b34e96695557c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   3f4b44fa75940       etcd-default-k8s-diff-port-455348                      kube-system
	e5b09162fcaf4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   1bd3e9eb59281       kube-apiserver-default-k8s-diff-port-455348            kube-system
	
	
	==> coredns [f6bc238b7f538a7f20fc0f48f49813daa4ac28c616e85783e9483bfc32f490fc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58764 - 64695 "HINFO IN 280051824672132967.7513914780674631863. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016045445s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
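The repeated "dial tcp 10.96.0.1:443: i/o timeout" lines mean CoreDNS cannot reach the kubernetes Service VIP, which kube-proxy is responsible for programming. With kube-proxy in iptables mode, the rule can be checked on the node (KUBE-SERVICES is kube-proxy's standard NAT chain, assumed here):
	sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1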
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-455348
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-455348
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=default-k8s-diff-port-455348
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_17_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:17:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-455348
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:19:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:19:07 +0000   Sun, 19 Oct 2025 13:17:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:19:07 +0000   Sun, 19 Oct 2025 13:17:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:19:07 +0000   Sun, 19 Oct 2025 13:17:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 13:19:07 +0000   Sun, 19 Oct 2025 13:17:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-455348
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                274325ea-a55a-4ae3-bfda-c03acb1cf740
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-66bc5c9577-qn68x                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m28s
	  kube-system                 etcd-default-k8s-diff-port-455348                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m33s
	  kube-system                 kindnet-m2tx2                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m28s
	  kube-system                 kube-apiserver-default-k8s-diff-port-455348             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-455348    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-proxy-vbd99                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-default-k8s-diff-port-455348             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-7hdg4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tvbrn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m27s                  kube-proxy       
	  Normal   Starting                 62s                    kube-proxy       
	  Normal   Starting                 2m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m41s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m41s (x8 over 2m41s)  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m41s (x8 over 2m41s)  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m41s (x8 over 2m41s)  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m33s                  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m33s                  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m33s                  kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m29s                  node-controller  Node default-k8s-diff-port-455348 event: Registered Node default-k8s-diff-port-455348 in Controller
	  Normal   NodeReady                107s                   kubelet          Node default-k8s-diff-port-455348 status is now: NodeReady
	  Normal   Starting                 73s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 73s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)      kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)      kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x8 over 73s)      kubelet          Node default-k8s-diff-port-455348 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           61s                    node-controller  Node default-k8s-diff-port-455348 event: Registered Node default-k8s-diff-port-455348 in Controller
	
	
	==> dmesg <==
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	[Oct19 13:15] overlayfs: idmapped layers are currently not supported
	[ +34.413925] overlayfs: idmapped layers are currently not supported
	[Oct19 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.716246] overlayfs: idmapped layers are currently not supported
	[Oct19 13:18] overlayfs: idmapped layers are currently not supported
	[Oct19 13:19] overlayfs: idmapped layers are currently not supported
	[ +25.562956] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b34e96695557c6959cce715a57b32eef60a662626ab95fd5b08a3505f2cfe53a] <==
	{"level":"warn","ts":"2025-10-19T13:18:33.592734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.629777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.678751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.734387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.777223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.803095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.837372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.862343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.876868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.909220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.934760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.978546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:33.996282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.044635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.070409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.106511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.128290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.152555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.163942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.186881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.232022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.272365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.289787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.386169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:18:34.517492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36442","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:19:42 up  3:02,  0 user,  load average: 5.11, 3.92, 3.09
	Linux default-k8s-diff-port-455348 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7b9203ac4a1b0f71c0dd63a1f8c349a569a3ce4f03d54c74eaa8ea2b7fa8839e] <==
	I1019 13:18:38.906244       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:18:38.917417       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 13:18:38.917633       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:18:38.917713       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:18:38.917755       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:18:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:18:39.041884       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:18:39.105769       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:18:39.105879       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:18:39.106412       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 13:19:09.042128       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 13:19:09.106651       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 13:19:09.106780       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 13:19:09.107815       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1019 13:19:10.706942       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 13:19:10.707058       1 metrics.go:72] Registering metrics
	I1019 13:19:10.707148       1 controller.go:711] "Syncing nftables rules"
	I1019 13:19:19.046009       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:19:19.046128       1 main.go:301] handling current node
	I1019 13:19:29.042663       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:19:29.042724       1 main.go:301] handling current node
	I1019 13:19:39.045972       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 13:19:39.046022       1 main.go:301] handling current node
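	
	The reflector errors above are client-side i/o timeouts against the in-cluster apiserver VIP (10.96.0.1:443, taken from the log) while the control plane restarts; kindnet recovers as soon as its caches sync at 13:19:10. A minimal sketch of the failing dial, with the 2s deadline an arbitrary illustrative choice:
	
		package main
	
		import (
			"fmt"
			"net"
			"time"
		)
	
		// Probe the apiserver service VIP with a deadline; while the apiserver
		// is down this fails the same way the list/watch calls above do.
		func main() {
			conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
			if err != nil {
				fmt.Println("dial failed:", err) // e.g. "i/o timeout"
				return
			}
			defer conn.Close()
			fmt.Println("apiserver VIP reachable")
		}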
	
	
	==> kube-apiserver [e5b09162fcaf4578399f5a03831d7d61cf4bfd1901478ea7fed991f19b9f174e] <==
	I1019 13:18:36.498447       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 13:18:36.498528       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 13:18:36.506285       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 13:18:36.506375       1 policy_source.go:240] refreshing policies
	I1019 13:18:36.518249       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:18:36.519098       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 13:18:36.519124       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 13:18:36.519242       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 13:18:36.519397       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 13:18:36.520856       1 aggregator.go:171] initial CRD sync complete...
	I1019 13:18:36.520882       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 13:18:36.520890       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 13:18:36.520896       1 cache.go:39] Caches are synced for autoregister controller
	I1019 13:18:36.576289       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:18:36.921885       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1019 13:18:36.938254       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 13:18:37.983988       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 13:18:38.118159       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 13:18:38.188137       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:18:38.220141       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:18:38.478602       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.158.248"}
	I1019 13:18:38.536319       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.184.193"}
	I1019 13:18:41.732307       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 13:18:42.154025       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 13:18:42.235229       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d68e31f9ddc629258adae34a5c4914451d4039479223db3fc89b9ec518005fc0] <==
	I1019 13:18:41.699175       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 13:18:41.699253       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:18:41.699708       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 13:18:41.699753       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 13:18:41.701292       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 13:18:41.705254       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 13:18:41.706741       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:18:41.707918       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 13:18:41.709531       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 13:18:41.712776       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 13:18:41.718284       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 13:18:41.724766       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 13:18:41.724815       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 13:18:41.724880       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 13:18:41.724941       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:18:41.724953       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 13:18:41.724960       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 13:18:41.725554       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 13:18:41.726920       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 13:18:41.726971       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 13:18:41.730739       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:18:41.741787       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 13:18:41.746586       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:18:42.258241       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1019 13:18:42.261132       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [77fee27408687abc67ef099c98ed62f58cae326fcb4d0fe2e71f7876a1fa488a] <==
	I1019 13:18:38.659277       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:18:39.176671       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:18:39.310052       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:18:39.310098       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 13:18:39.310181       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:18:39.836267       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:18:39.836352       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:18:39.950951       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:18:39.951443       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:18:39.951647       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:18:39.952992       1 config.go:200] "Starting service config controller"
	I1019 13:18:39.953056       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:18:39.953121       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:18:39.953157       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:18:39.953199       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:18:39.953234       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:18:39.954092       1 config.go:309] "Starting node config controller"
	I1019 13:18:39.954168       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:18:39.954213       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:18:40.055489       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:18:40.055529       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 13:18:40.055607       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
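	
	The "Kube-proxy configuration may be incomplete or incorrect" line above is advisory: with nodePortAddresses unset, NodePort connections are accepted on every local IP. The remedy is the flag the warning itself names, restricting NodePorts to the primary node IP (shown here exactly as the message suggests; wiring it into a minikube profile is outside what this log shows):
	
		kube-proxy --nodeport-addresses=primary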
	
	
	==> kube-scheduler [9dc424071c1b92771542bfccd38e435461e8182ac00adb300909438d1cbf9b8f] <==
	I1019 13:18:32.451975       1 serving.go:386] Generated self-signed cert in-memory
	W1019 13:18:36.178408       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 13:18:36.178506       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 13:18:36.178539       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 13:18:36.178580       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 13:18:36.535592       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 13:18:36.535715       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:18:36.585541       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:18:36.585707       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:18:36.586985       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:18:36.585722       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 13:18:36.689833       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 13:18:49 default-k8s-diff-port-455348 kubelet[775]: I1019 13:18:49.425363     775 scope.go:117] "RemoveContainer" containerID="6596480724fa779cb64e08b7b57aa1119aac5e154babd7c9d27b8b992ad0af96"
	Oct 19 13:18:50 default-k8s-diff-port-455348 kubelet[775]: I1019 13:18:50.430711     775 scope.go:117] "RemoveContainer" containerID="6596480724fa779cb64e08b7b57aa1119aac5e154babd7c9d27b8b992ad0af96"
	Oct 19 13:18:50 default-k8s-diff-port-455348 kubelet[775]: I1019 13:18:50.430992     775 scope.go:117] "RemoveContainer" containerID="7fb9a36843d5b36479284481530b48398b5d745954401f1590af0523b3ae48be"
	Oct 19 13:18:50 default-k8s-diff-port-455348 kubelet[775]: E1019 13:18:50.431148     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:18:51 default-k8s-diff-port-455348 kubelet[775]: I1019 13:18:51.435082     775 scope.go:117] "RemoveContainer" containerID="7fb9a36843d5b36479284481530b48398b5d745954401f1590af0523b3ae48be"
	Oct 19 13:18:51 default-k8s-diff-port-455348 kubelet[775]: E1019 13:18:51.435235     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:18:52 default-k8s-diff-port-455348 kubelet[775]: I1019 13:18:52.438875     775 scope.go:117] "RemoveContainer" containerID="7fb9a36843d5b36479284481530b48398b5d745954401f1590af0523b3ae48be"
	Oct 19 13:18:52 default-k8s-diff-port-455348 kubelet[775]: E1019 13:18:52.439045     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:19:06 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:06.220510     775 scope.go:117] "RemoveContainer" containerID="7fb9a36843d5b36479284481530b48398b5d745954401f1590af0523b3ae48be"
	Oct 19 13:19:06 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:06.488728     775 scope.go:117] "RemoveContainer" containerID="7fb9a36843d5b36479284481530b48398b5d745954401f1590af0523b3ae48be"
	Oct 19 13:19:06 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:06.490176     775 scope.go:117] "RemoveContainer" containerID="dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0"
	Oct 19 13:19:06 default-k8s-diff-port-455348 kubelet[775]: E1019 13:19:06.490479     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:19:06 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:06.516717     775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tvbrn" podStartSLOduration=11.210575253 podStartE2EDuration="24.516702617s" podCreationTimestamp="2025-10-19 13:18:42 +0000 UTC" firstStartedPulling="2025-10-19 13:18:42.765124772 +0000 UTC m=+13.846503273" lastFinishedPulling="2025-10-19 13:18:56.071252128 +0000 UTC m=+27.152630637" observedRunningTime="2025-10-19 13:18:56.477821713 +0000 UTC m=+27.559200230" watchObservedRunningTime="2025-10-19 13:19:06.516702617 +0000 UTC m=+37.598081118"
	Oct 19 13:19:09 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:09.506022     775 scope.go:117] "RemoveContainer" containerID="64b3263c4cb9377c973c0405da32ab9f8ae72ae6589d72bc7ad0b1fc5dc41c04"
	Oct 19 13:19:12 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:12.393225     775 scope.go:117] "RemoveContainer" containerID="dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0"
	Oct 19 13:19:12 default-k8s-diff-port-455348 kubelet[775]: E1019 13:19:12.393411     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:19:27 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:27.222299     775 scope.go:117] "RemoveContainer" containerID="dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0"
	Oct 19 13:19:27 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:27.562946     775 scope.go:117] "RemoveContainer" containerID="dfa8474a2bcb75dc9e48fe4a9fd1a41cbfbc8d3304281c871b556b0e9107cad0"
	Oct 19 13:19:27 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:27.563233     775 scope.go:117] "RemoveContainer" containerID="58deb2a42f9abf760898d192ccbd4c49190875c9116b13743bcd893003255084"
	Oct 19 13:19:27 default-k8s-diff-port-455348 kubelet[775]: E1019 13:19:27.563391     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:19:32 default-k8s-diff-port-455348 kubelet[775]: I1019 13:19:32.393384     775 scope.go:117] "RemoveContainer" containerID="58deb2a42f9abf760898d192ccbd4c49190875c9116b13743bcd893003255084"
	Oct 19 13:19:32 default-k8s-diff-port-455348 kubelet[775]: E1019 13:19:32.394530     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7hdg4_kubernetes-dashboard(7bb5d561-f081-4919-943f-d31f4e5ee4fc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7hdg4" podUID="7bb5d561-f081-4919-943f-d31f4e5ee4fc"
	Oct 19 13:19:35 default-k8s-diff-port-455348 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 13:19:36 default-k8s-diff-port-455348 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 13:19:36 default-k8s-diff-port-455348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
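	
	The dashboard-metrics-scraper restarts above show kubelet's CrashLoopBackOff schedule doubling: back-off 10s, then 20s, then 40s. A sketch of that schedule, assuming the upstream defaults of a 10s initial delay and a 5m cap (the cap is not reached in this log):
	
		package main
	
		import (
			"fmt"
			"time"
		)
	
		// Print the restart back-off sequence kubelet applies to a
		// crash-looping container: double each time, capped at five minutes.
		func main() {
			backoff, maxBackoff := 10*time.Second, 5*time.Minute
			for i := 0; i < 7; i++ {
				fmt.Println("back-off", backoff, "restarting failed container")
				if backoff *= 2; backoff > maxBackoff {
					backoff = maxBackoff
				}
			}
		}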
	
	
	==> kubernetes-dashboard [f1059e6092955af4f3316486a54cacbf36083e9dda490f278b0fb3ef045f8eb2] <==
	2025/10/19 13:18:56 Using namespace: kubernetes-dashboard
	2025/10/19 13:18:56 Using in-cluster config to connect to apiserver
	2025/10/19 13:18:56 Using secret token for csrf signing
	2025/10/19 13:18:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 13:18:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 13:18:56 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 13:18:56 Generating JWE encryption key
	2025/10/19 13:18:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 13:18:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 13:18:56 Initializing JWE encryption key from synchronized object
	2025/10/19 13:18:56 Creating in-cluster Sidecar client
	2025/10/19 13:18:56 Serving insecurely on HTTP port: 9090
	2025/10/19 13:18:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:19:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 13:18:56 Starting overwatch
	
	
	==> storage-provisioner [64b3263c4cb9377c973c0405da32ab9f8ae72ae6589d72bc7ad0b1fc5dc41c04] <==
	I1019 13:18:39.021409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 13:19:09.023017       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
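	
	The first storage-provisioner instance above exits fatally because its startup version check against the apiserver times out; the replacement instance in the next block succeeds once the apiserver is reachable again. A hedged sketch of that check, with the URL taken from the log; TLS verification is skipped only to keep the sketch self-contained, whereas the real provisioner uses in-cluster credentials:
	
		package main
	
		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
			"time"
		)
	
		// GET /version on the apiserver VIP with a bounded timeout, mirroring
		// `Get "https://10.96.0.1:443/version?timeout=32s"` from the log.
		func main() {
			client := &http.Client{
				Timeout:   32 * time.Second,
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
			if err != nil {
				fmt.Println("error getting server version:", err) // e.g. i/o timeout
				return
			}
			defer resp.Body.Close()
			body, _ := io.ReadAll(resp.Body)
			fmt.Printf("%s\n", body)
		}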
	
	
	==> storage-provisioner [aa8fdf86ae37de45052d5f9afe9fd03316efa20075210f8c3437382ef6fb7292] <==
	W1019 13:19:17.340009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:20.938420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:23.992059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:27.015903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:27.021602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:19:27.021810       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 13:19:27.021988       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-455348_bddb783d-ca81-4722-950a-c0956362b63b!
	I1019 13:19:27.022659       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b99ed1ce-9305-43d9-afc4-d6b8159429cd", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-455348_bddb783d-ca81-4722-950a-c0956362b63b became leader
	W1019 13:19:27.027074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:27.032707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 13:19:27.122221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-455348_bddb783d-ca81-4722-950a-c0956362b63b!
	W1019 13:19:29.035971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:29.045472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:31.049501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:31.059462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:33.068604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:33.077149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:35.085997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:35.098474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:37.102426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:37.107476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:39.115712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:39.150257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:41.165903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 13:19:41.191205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-455348 -n default-k8s-diff-port-455348
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-455348 -n default-k8s-diff-port-455348: exit status 2 (537.086874ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
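The status probe above renders minikube's component state through a Go template, --format={{.APIServer}}, which is why only the apiserver field prints ("Running" in the stdout block). A sketch of the mechanism; only APIServer is grounded in the command, the other field names are assumptions:

	package main

	import (
		"os"
		"text/template"
	)

	// Render one field of a status struct the way
	// `minikube status --format={{.APIServer}}` does.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}); err != nil {
			panic(err)
		}
	}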
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-455348 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.64s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (7.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-895642 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-895642 --alsologtostderr -v=1: exit status 80 (2.409532733s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-895642 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 13:19:43.102133  505908 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:19:43.102255  505908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:19:43.102260  505908 out.go:374] Setting ErrFile to fd 2...
	I1019 13:19:43.102264  505908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:19:43.110567  505908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:19:43.111878  505908 out.go:368] Setting JSON to false
	I1019 13:19:43.112081  505908 mustload.go:65] Loading cluster: newest-cni-895642
	I1019 13:19:43.112989  505908 config.go:182] Loaded profile config "newest-cni-895642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:19:43.113520  505908 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:43.178070  505908 host.go:66] Checking if "newest-cni-895642" exists ...
	I1019 13:19:43.178366  505908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:19:43.298331  505908 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-19 13:19:43.282257894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:19:43.299092  505908 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-895642 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 13:19:43.303012  505908 out.go:179] * Pausing node newest-cni-895642 ... 
	I1019 13:19:43.306644  505908 host.go:66] Checking if "newest-cni-895642" exists ...
	I1019 13:19:43.306967  505908 ssh_runner.go:195] Run: systemctl --version
	I1019 13:19:43.307005  505908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:43.345616  505908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:43.456734  505908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:19:43.471279  505908 pause.go:52] kubelet running: true
	I1019 13:19:43.471348  505908 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:19:43.785306  505908 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:19:43.785387  505908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:19:43.955431  505908 cri.go:89] found id: "03dbca539bcfc98cd7a3a2ec0eba96e6e563c371d668f62cf7af7e2a2476fb71"
	I1019 13:19:43.955451  505908 cri.go:89] found id: "7d41e598d4b099b52ee82c1ad8784082e78b722e837c80f62909d2860ad4de4f"
	I1019 13:19:43.955456  505908 cri.go:89] found id: "61f12db9b3adb0cf23775bbe9376fe1695c4d2722a25fc54809b48613d48b61f"
	I1019 13:19:43.955461  505908 cri.go:89] found id: "9e6f3db84aecaf5fccfaa84fa11003ed9c1a3adc30985ec057866ca7a90cdc83"
	I1019 13:19:43.955464  505908 cri.go:89] found id: "df7751d1304bdecb2f8c2da9564eb9648edb59cf776486a8eab0e66763b2a99a"
	I1019 13:19:43.955468  505908 cri.go:89] found id: "f23d9dc2b7b73320b039706001020bf4aba009db6c81f31750b64ba7d4b7b791"
	I1019 13:19:43.955471  505908 cri.go:89] found id: ""
	I1019 13:19:43.955523  505908 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:19:43.969988  505908 retry.go:31] will retry after 220.863139ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:19:43Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:19:44.191470  505908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:19:44.206092  505908 pause.go:52] kubelet running: false
	I1019 13:19:44.206168  505908 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:19:44.564470  505908 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:19:44.564546  505908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:19:44.725539  505908 cri.go:89] found id: "03dbca539bcfc98cd7a3a2ec0eba96e6e563c371d668f62cf7af7e2a2476fb71"
	I1019 13:19:44.725561  505908 cri.go:89] found id: "7d41e598d4b099b52ee82c1ad8784082e78b722e837c80f62909d2860ad4de4f"
	I1019 13:19:44.725566  505908 cri.go:89] found id: "61f12db9b3adb0cf23775bbe9376fe1695c4d2722a25fc54809b48613d48b61f"
	I1019 13:19:44.725570  505908 cri.go:89] found id: "9e6f3db84aecaf5fccfaa84fa11003ed9c1a3adc30985ec057866ca7a90cdc83"
	I1019 13:19:44.725574  505908 cri.go:89] found id: "df7751d1304bdecb2f8c2da9564eb9648edb59cf776486a8eab0e66763b2a99a"
	I1019 13:19:44.725577  505908 cri.go:89] found id: "f23d9dc2b7b73320b039706001020bf4aba009db6c81f31750b64ba7d4b7b791"
	I1019 13:19:44.725580  505908 cri.go:89] found id: ""
	I1019 13:19:44.725630  505908 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:19:44.738930  505908 retry.go:31] will retry after 267.481929ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:19:44Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:19:45.006873  505908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 13:19:45.061426  505908 pause.go:52] kubelet running: false
	I1019 13:19:45.061563  505908 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 13:19:45.242573  505908 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 13:19:45.242691  505908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 13:19:45.344747  505908 cri.go:89] found id: "03dbca539bcfc98cd7a3a2ec0eba96e6e563c371d668f62cf7af7e2a2476fb71"
	I1019 13:19:45.344818  505908 cri.go:89] found id: "7d41e598d4b099b52ee82c1ad8784082e78b722e837c80f62909d2860ad4de4f"
	I1019 13:19:45.344841  505908 cri.go:89] found id: "61f12db9b3adb0cf23775bbe9376fe1695c4d2722a25fc54809b48613d48b61f"
	I1019 13:19:45.344864  505908 cri.go:89] found id: "9e6f3db84aecaf5fccfaa84fa11003ed9c1a3adc30985ec057866ca7a90cdc83"
	I1019 13:19:45.344899  505908 cri.go:89] found id: "df7751d1304bdecb2f8c2da9564eb9648edb59cf776486a8eab0e66763b2a99a"
	I1019 13:19:45.344934  505908 cri.go:89] found id: "f23d9dc2b7b73320b039706001020bf4aba009db6c81f31750b64ba7d4b7b791"
	I1019 13:19:45.344956  505908 cri.go:89] found id: ""
	I1019 13:19:45.345059  505908 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 13:19:45.364579  505908 out.go:203] 
	W1019 13:19:45.367465  505908 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:19:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:19:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 13:19:45.367489  505908 out.go:285] * 
	* 
	W1019 13:19:45.375230  505908 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 13:19:45.378327  505908 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-895642 --alsologtostderr -v=1 failed: exit status 80
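The stderr log above shows the pause sequence clearly: kubelet is found active and disabled, six containers in the kube-system/kubernetes-dashboard/istio-operator namespaces are located via crictl, but every attempt at `sudo runc list -f json` (retried after ~220ms and ~267ms) exits 1 with "open /run/runc: no such file or directory", so minikube aborts with GUEST_PAUSE. One plausible reading is that on this crio node the runc state does not live under the default /run/runc root, so the listing can never succeed. A hedged reconstruction of just the failing step, decoding runc's JSON state list (field names per runc's documented `list -f json` output):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// One container state entry as emitted by `runc list -f json`.
	type containerState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
		Bundle string `json:"bundle"`
	}

	// listRunning shells out the same way the pause path does; when the runc
	// root directory is missing the command exits 1 (runc prints "open
	// /run/runc: no such file or directory" on stderr), as in the log above.
	func listRunning() ([]containerState, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var states []containerState
		if err := json.Unmarshal(out, &states); err != nil {
			return nil, err
		}
		return states, nil
	}

	func main() {
		states, err := listRunning()
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, s := range states {
			fmt.Println(s.ID, s.Status, s.Bundle)
		}
	}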
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-895642
helpers_test.go:243: (dbg) docker inspect newest-cni-895642:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91",
	        "Created": "2025-10-19T13:18:47.102094751Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 503310,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:19:25.443920872Z",
	            "FinishedAt": "2025-10-19T13:19:24.497493043Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/hostname",
	        "HostsPath": "/var/lib/docker/containers/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/hosts",
	        "LogPath": "/var/lib/docker/containers/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91-json.log",
	        "Name": "/newest-cni-895642",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-895642:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-895642",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91",
	                "LowerDir": "/var/lib/docker/overlay2/78a263d1d7086b8fb12930f09e9fe63d30f6fc9948d021e88738800232e60a99-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/78a263d1d7086b8fb12930f09e9fe63d30f6fc9948d021e88738800232e60a99/merged",
	                "UpperDir": "/var/lib/docker/overlay2/78a263d1d7086b8fb12930f09e9fe63d30f6fc9948d021e88738800232e60a99/diff",
	                "WorkDir": "/var/lib/docker/overlay2/78a263d1d7086b8fb12930f09e9fe63d30f6fc9948d021e88738800232e60a99/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-895642",
	                "Source": "/var/lib/docker/volumes/newest-cni-895642/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-895642",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-895642",
	                "name.minikube.sigs.k8s.io": "newest-cni-895642",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5012b598f1e8d96e43ae860f77e82c0022d6e03d0f75e58f1fb8f72461ef29eb",
	            "SandboxKey": "/var/run/docker/netns/5012b598f1e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-895642": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:b0:94:94:6f:23",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "584dee223ade6b07d2b96f7183f8063e011ff006f776b87c19f6da2971cc4a7f",
	                    "EndpointID": "f5ba3363136a01c3e13cd7a75ad963a1d6a3516490498877c88639573baff14f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-895642",
	                        "caf0cfe00265"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
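The inspect dump above is where the harness reads back the host-side port mappings: each container port (22, 2376, 5000, 8443, 32443) is published on 127.0.0.1 with an ephemeral HostPort. A minimal sketch of the lookup minikube performs for the SSH port (the same Go template appears in the cli_runner lines of the start log below):

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      newest-cni-895642
    # prints 33463 for this run; provisioning then dials SSH on 127.0.0.1:33463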
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895642 -n newest-cni-895642
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895642 -n newest-cni-895642: exit status 2 (440.193307ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
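minikube status encodes component state in its exit code, so "Running" on stdout alongside exit status 2 is consistent with a host that is up while the paused kubelet/apiserver are not; hence the "(may be ok)" note. To reproduce the check by hand with the binary from this run:

    out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895642 -n newest-cni-895642; echo "exit=$?"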
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-895642 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-895642 logs -n 25: (1.428215552s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-108149                                                                                                                                                                                                                          │ no-preload-108149            │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ delete  │ -p disable-driver-mounts-418719                                                                                                                                                                                                               │ disable-driver-mounts-418719 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:16 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │                     │
	│ stop    │ -p embed-certs-834340 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-834340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-455348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-455348 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-455348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:19 UTC │
	│ image   │ embed-certs-834340 image list --format=json                                                                                                                                                                                                   │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ pause   │ -p embed-certs-834340 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ delete  │ -p embed-certs-834340                                                                                                                                                                                                                         │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ delete  │ -p embed-certs-834340                                                                                                                                                                                                                         │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ start   │ -p newest-cni-895642 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-895642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	│ stop    │ -p newest-cni-895642 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-895642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ start   │ -p newest-cni-895642 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ image   │ default-k8s-diff-port-455348 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ pause   │ -p default-k8s-diff-port-455348 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	│ image   │ newest-cni-895642 image list --format=json                                                                                                                                                                                                    │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ pause   │ -p newest-cni-895642 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-455348                                                                                                                                                                                                               │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:19:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:19:25.169345  503186 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:19:25.169574  503186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:19:25.169605  503186 out.go:374] Setting ErrFile to fd 2...
	I1019 13:19:25.169626  503186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:19:25.169968  503186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:19:25.170447  503186 out.go:368] Setting JSON to false
	I1019 13:19:25.171637  503186 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10916,"bootTime":1760869050,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:19:25.171742  503186 start.go:141] virtualization:  
	I1019 13:19:25.174991  503186 out.go:179] * [newest-cni-895642] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:19:25.179084  503186 notify.go:220] Checking for updates...
	I1019 13:19:25.180010  503186 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:19:25.183043  503186 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:19:25.186046  503186 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:19:25.189047  503186 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:19:25.192100  503186 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:19:25.195015  503186 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:19:25.198292  503186 config.go:182] Loaded profile config "newest-cni-895642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:19:25.198883  503186 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:19:25.226453  503186 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:19:25.226605  503186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:19:25.295086  503186 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:19:25.278659503 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:19:25.295192  503186 docker.go:318] overlay module found
	I1019 13:19:25.298224  503186 out.go:179] * Using the docker driver based on existing profile
	I1019 13:19:25.301721  503186 start.go:305] selected driver: docker
	I1019 13:19:25.301740  503186 start.go:925] validating driver "docker" against &{Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:19:25.301843  503186 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:19:25.302559  503186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:19:25.357591  503186 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:19:25.348074344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:19:25.357979  503186 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 13:19:25.358017  503186 cni.go:84] Creating CNI manager for ""
	I1019 13:19:25.358086  503186 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:19:25.358136  503186 start.go:349] cluster config:
	{Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:19:25.361457  503186 out.go:179] * Starting "newest-cni-895642" primary control-plane node in "newest-cni-895642" cluster
	I1019 13:19:25.364282  503186 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:19:25.367200  503186 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:19:25.370018  503186 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:19:25.370106  503186 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:19:25.370116  503186 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 13:19:25.370138  503186 cache.go:58] Caching tarball of preloaded images
	I1019 13:19:25.370227  503186 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:19:25.370241  503186 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 13:19:25.370363  503186 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/config.json ...
	I1019 13:19:25.389423  503186 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:19:25.389445  503186 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:19:25.389464  503186 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:19:25.389487  503186 start.go:360] acquireMachinesLock for newest-cni-895642: {Name:mke5c4230882c7c86983f0da461147450e8e886d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:19:25.389556  503186 start.go:364] duration metric: took 46.253µs to acquireMachinesLock for "newest-cni-895642"
	I1019 13:19:25.389579  503186 start.go:96] Skipping create...Using existing machine configuration
	I1019 13:19:25.389586  503186 fix.go:54] fixHost starting: 
	I1019 13:19:25.389918  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:25.406454  503186 fix.go:112] recreateIfNeeded on newest-cni-895642: state=Stopped err=<nil>
	W1019 13:19:25.406489  503186 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 13:19:25.409740  503186 out.go:252] * Restarting existing docker container for "newest-cni-895642" ...
	I1019 13:19:25.409823  503186 cli_runner.go:164] Run: docker start newest-cni-895642
	I1019 13:19:25.672026  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:25.703460  503186 kic.go:430] container "newest-cni-895642" state is running.
	I1019 13:19:25.704103  503186 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895642
	I1019 13:19:25.727547  503186 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/config.json ...
	I1019 13:19:25.727779  503186 machine.go:93] provisionDockerMachine start ...
	I1019 13:19:25.727860  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:25.756624  503186 main.go:141] libmachine: Using SSH client type: native
	I1019 13:19:25.757520  503186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1019 13:19:25.757541  503186 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 13:19:25.758657  503186 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1019 13:19:28.913337  503186 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-895642
	
	I1019 13:19:28.913369  503186 ubuntu.go:182] provisioning hostname "newest-cni-895642"
	I1019 13:19:28.913434  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:28.933553  503186 main.go:141] libmachine: Using SSH client type: native
	I1019 13:19:28.934046  503186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1019 13:19:28.934066  503186 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-895642 && echo "newest-cni-895642" | sudo tee /etc/hostname
	I1019 13:19:29.099311  503186 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-895642
	
	I1019 13:19:29.099432  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:29.119806  503186 main.go:141] libmachine: Using SSH client type: native
	I1019 13:19:29.120136  503186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1019 13:19:29.120158  503186 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895642' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895642/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895642' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 13:19:29.277884  503186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
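	The provisioning steps above all run over SSH to the forwarded port 33463 with the per-machine key (the parameters appear in the sshutil lines below). A roughly equivalent manual invocation for this run:
	
	  ssh -o StrictHostKeyChecking=no -p 33463 \
	    -i /home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa \
	    docker@127.0.0.1 hostname
	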
	I1019 13:19:29.277914  503186 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-292654/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-292654/.minikube}
	I1019 13:19:29.277946  503186 ubuntu.go:190] setting up certificates
	I1019 13:19:29.277961  503186 provision.go:84] configureAuth start
	I1019 13:19:29.278034  503186 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895642
	I1019 13:19:29.301808  503186 provision.go:143] copyHostCerts
	I1019 13:19:29.301873  503186 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem, removing ...
	I1019 13:19:29.301892  503186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem
	I1019 13:19:29.301967  503186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/ca.pem (1082 bytes)
	I1019 13:19:29.302085  503186 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem, removing ...
	I1019 13:19:29.302090  503186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem
	I1019 13:19:29.302117  503186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/cert.pem (1123 bytes)
	I1019 13:19:29.302199  503186 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem, removing ...
	I1019 13:19:29.302205  503186 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem
	I1019 13:19:29.302233  503186 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-292654/.minikube/key.pem (1679 bytes)
	I1019 13:19:29.302290  503186 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem org=jenkins.newest-cni-895642 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-895642]
	I1019 13:19:29.374167  503186 provision.go:177] copyRemoteCerts
	I1019 13:19:29.374259  503186 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 13:19:29.374318  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:29.391140  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:29.493830  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 13:19:29.514390  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 13:19:29.532916  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 13:19:29.550713  503186 provision.go:87] duration metric: took 272.732509ms to configureAuth
	I1019 13:19:29.550741  503186 ubuntu.go:206] setting minikube options for container-runtime
	I1019 13:19:29.550946  503186 config.go:182] Loaded profile config "newest-cni-895642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:19:29.551070  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:29.569921  503186 main.go:141] libmachine: Using SSH client type: native
	I1019 13:19:29.570253  503186 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1780 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1019 13:19:29.570274  503186 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 13:19:29.873306  503186 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 13:19:29.873332  503186 machine.go:96] duration metric: took 4.145535815s to provisionDockerMachine
	I1019 13:19:29.873352  503186 start.go:293] postStartSetup for "newest-cni-895642" (driver="docker")
	I1019 13:19:29.873364  503186 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 13:19:29.873444  503186 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 13:19:29.873490  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:29.890148  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:29.997258  503186 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 13:19:30.002593  503186 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 13:19:30.002644  503186 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 13:19:30.002659  503186 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/addons for local assets ...
	I1019 13:19:30.002738  503186 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-292654/.minikube/files for local assets ...
	I1019 13:19:30.002829  503186 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem -> 2945182.pem in /etc/ssl/certs
	I1019 13:19:30.002936  503186 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 13:19:30.029850  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:19:30.066991  503186 start.go:296] duration metric: took 193.620206ms for postStartSetup
	I1019 13:19:30.067112  503186 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 13:19:30.067248  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:30.088223  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:30.191529  503186 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 13:19:30.196369  503186 fix.go:56] duration metric: took 4.806775977s for fixHost
	I1019 13:19:30.196395  503186 start.go:83] releasing machines lock for "newest-cni-895642", held for 4.806827736s
	I1019 13:19:30.196471  503186 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895642
	I1019 13:19:30.214998  503186 ssh_runner.go:195] Run: cat /version.json
	I1019 13:19:30.215056  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:30.215139  503186 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 13:19:30.215199  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:30.240016  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:30.241564  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:30.346049  503186 ssh_runner.go:195] Run: systemctl --version
	I1019 13:19:30.440967  503186 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 13:19:30.476294  503186 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 13:19:30.480767  503186 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 13:19:30.480880  503186 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 13:19:30.488567  503186 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 13:19:30.488602  503186 start.go:495] detecting cgroup driver to use...
	I1019 13:19:30.488634  503186 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1019 13:19:30.488699  503186 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 13:19:30.504768  503186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 13:19:30.517613  503186 docker.go:218] disabling cri-docker service (if available) ...
	I1019 13:19:30.517744  503186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 13:19:30.534697  503186 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 13:19:30.547999  503186 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 13:19:30.666826  503186 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 13:19:30.789564  503186 docker.go:234] disabling docker service ...
	I1019 13:19:30.789718  503186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 13:19:30.805667  503186 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 13:19:30.827277  503186 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 13:19:30.950983  503186 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 13:19:31.080274  503186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 13:19:31.095662  503186 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 13:19:31.111621  503186 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 13:19:31.111694  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.122130  503186 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 13:19:31.122227  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.132706  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.142968  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.152846  503186 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 13:19:31.161851  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.171479  503186 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.180553  503186 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 13:19:31.190292  503186 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 13:19:31.198459  503186 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 13:19:31.205996  503186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:19:31.330350  503186 ssh_runner.go:195] Run: sudo systemctl restart crio
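	The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before restarting cri-o. Assuming stock section headers, the edited fragment ends up roughly as:
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	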
	I1019 13:19:31.465643  503186 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 13:19:31.465758  503186 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 13:19:31.469621  503186 start.go:563] Will wait 60s for crictl version
	I1019 13:19:31.469847  503186 ssh_runner.go:195] Run: which crictl
	I1019 13:19:31.473844  503186 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 13:19:31.498952  503186 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 13:19:31.499106  503186 ssh_runner.go:195] Run: crio --version
	I1019 13:19:31.528942  503186 ssh_runner.go:195] Run: crio --version
	I1019 13:19:31.561590  503186 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 13:19:31.564368  503186 cli_runner.go:164] Run: docker network inspect newest-cni-895642 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 13:19:31.581115  503186 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 13:19:31.584948  503186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 13:19:31.597969  503186 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 13:19:31.600776  503186 kubeadm.go:883] updating cluster {Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 13:19:31.600918  503186 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:19:31.601013  503186 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:19:31.635369  503186 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:19:31.635393  503186 crio.go:433] Images already preloaded, skipping extraction
	I1019 13:19:31.635446  503186 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 13:19:31.662206  503186 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 13:19:31.662228  503186 cache_images.go:85] Images are preloaded, skipping loading
	I1019 13:19:31.662251  503186 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 13:19:31.662400  503186 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-895642 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
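	The unit text above is installed by the scp lines below (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf plus /lib/systemd/system/kubelet.service). To inspect the merged unit on the node:
	
	  systemctl cat kubelet
	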
	I1019 13:19:31.662503  503186 ssh_runner.go:195] Run: crio config
	I1019 13:19:31.740719  503186 cni.go:84] Creating CNI manager for ""
	I1019 13:19:31.740744  503186 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:19:31.740791  503186 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 13:19:31.740824  503186 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895642 NodeName:newest-cni-895642 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 13:19:31.740960  503186 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-895642"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 13:19:31.741033  503186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 13:19:31.749166  503186 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 13:19:31.749265  503186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 13:19:31.757972  503186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 13:19:31.772492  503186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 13:19:31.785933  503186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
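	The kubeadm/kubelet/kube-proxy config shown above is rendered from a Go template before being copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of that template-rendering step, with a made-up struct and a trimmed-down template (minikube's real template lives elsewhere in its source tree; everything here is a stand-in):

	// Hypothetical sketch: rendering a kubeadm InitConfiguration snippet
	// the way a template-driven generator might.
	package main

	import (
		"os"
		"text/template"
	)

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		tmpl := template.Must(template.New("init").Parse(initCfg))
		// Values taken from the log above; the struct itself is an assumption.
		data := struct {
			AdvertiseAddress, CRISocket, NodeName string
			APIServerPort                         int
		}{"192.168.85.2", "unix:///var/run/crio/crio.sock", "newest-cni-895642", 8443}
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}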
	I1019 13:19:31.799868  503186 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 13:19:31.803838  503186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
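	The bash one-liner above makes the /etc/hosts update idempotent: filter out any old control-plane entry, append the fresh one, and copy the result back. A hedged Go equivalent of that filter-and-append step:

	// Sketch (assumed equivalent of the bash one-liner above): drop any stale
	// "control-plane.minikube.internal" entry from /etc/hosts, then append the
	// current one. Needs root, like the sudo cp in the log.
	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const hostsPath = "/etc/hosts"
		const suffix = "\tcontrol-plane.minikube.internal"

		raw, err := os.ReadFile(hostsPath)
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
			if !strings.HasSuffix(line, suffix) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.85.2"+suffix)
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}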
	I1019 13:19:31.813960  503186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:19:31.927165  503186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:19:31.943424  503186 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642 for IP: 192.168.85.2
	I1019 13:19:31.943448  503186 certs.go:195] generating shared ca certs ...
	I1019 13:19:31.943464  503186 certs.go:227] acquiring lock for ca certs: {Name:mk8f2f1c683cf5104ef70f6f3d59bf8f6240d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:31.943596  503186 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key
	I1019 13:19:31.943651  503186 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key
	I1019 13:19:31.943663  503186 certs.go:257] generating profile certs ...
	I1019 13:19:31.943751  503186 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/client.key
	I1019 13:19:31.943815  503186 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.key.d4125fb8
	I1019 13:19:31.943857  503186 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.key
	I1019 13:19:31.943986  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem (1338 bytes)
	W1019 13:19:31.944020  503186 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518_empty.pem, impossibly tiny 0 bytes
	I1019 13:19:31.944033  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca-key.pem (1679 bytes)
	I1019 13:19:31.944067  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/ca.pem (1082 bytes)
	I1019 13:19:31.944096  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/cert.pem (1123 bytes)
	I1019 13:19:31.944123  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/certs/key.pem (1679 bytes)
	I1019 13:19:31.944168  503186 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem (1708 bytes)
	I1019 13:19:31.944773  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 13:19:31.968205  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 13:19:31.988543  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 13:19:32.011799  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1019 13:19:32.034572  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 13:19:32.058346  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 13:19:32.081243  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 13:19:32.107781  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/newest-cni-895642/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 13:19:32.139561  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/ssl/certs/2945182.pem --> /usr/share/ca-certificates/2945182.pem (1708 bytes)
	I1019 13:19:32.167240  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 13:19:32.190650  503186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-292654/.minikube/certs/294518.pem --> /usr/share/ca-certificates/294518.pem (1338 bytes)
	I1019 13:19:32.210444  503186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 13:19:32.225354  503186 ssh_runner.go:195] Run: openssl version
	I1019 13:19:32.231708  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 13:19:32.240811  503186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:19:32.244713  503186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:14 /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:19:32.244786  503186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 13:19:32.294059  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 13:19:32.302549  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294518.pem && ln -fs /usr/share/ca-certificates/294518.pem /etc/ssl/certs/294518.pem"
	I1019 13:19:32.312139  503186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294518.pem
	I1019 13:19:32.315693  503186 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:20 /usr/share/ca-certificates/294518.pem
	I1019 13:19:32.315781  503186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294518.pem
	I1019 13:19:32.356958  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294518.pem /etc/ssl/certs/51391683.0"
	I1019 13:19:32.365153  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2945182.pem && ln -fs /usr/share/ca-certificates/2945182.pem /etc/ssl/certs/2945182.pem"
	I1019 13:19:32.373578  503186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2945182.pem
	I1019 13:19:32.377296  503186 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:20 /usr/share/ca-certificates/2945182.pem
	I1019 13:19:32.377391  503186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2945182.pem
	I1019 13:19:32.421016  503186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2945182.pem /etc/ssl/certs/3ec20f2e.0"
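	Each ln -fs above pairs with the preceding openssl x509 -hash -noout: OpenSSL looks up CA certificates through a <subject-hash>.0 symlink in /etc/ssl/certs. A small sketch of that rehash step, using the minikubeCA paths from the log:

	// Sketch of the c_rehash-style step above: ask openssl for the subject-name
	// hash, then point /etc/ssl/certs/<hash>.0 at the PEM file.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // ignore error: emulate ln -fs, which overwrites
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}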
	I1019 13:19:32.429783  503186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 13:19:32.433636  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 13:19:32.475552  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 13:19:32.516702  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 13:19:32.557815  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 13:19:32.600480  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 13:19:32.650589  503186 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
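	The -checkend 86400 runs above ask whether each certificate will still be valid 24 hours from now. The same check can be done in-process with crypto/x509; a sketch, using one of the paths from the log:

	// Sketch: the same 24-hour validity check as `openssl x509 -checkend 86400`,
	// done with crypto/x509 instead of shelling out.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 86400s")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another day")
	}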
	I1019 13:19:32.714948  503186 kubeadm.go:400] StartCluster: {Name:newest-cni-895642 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-895642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:19:32.715039  503186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 13:19:32.715138  503186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 13:19:32.785785  503186 cri.go:89] found id: "df7751d1304bdecb2f8c2da9564eb9648edb59cf776486a8eab0e66763b2a99a"
	I1019 13:19:32.785808  503186 cri.go:89] found id: ""
	I1019 13:19:32.785895  503186 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 13:19:32.812260  503186 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T13:19:32Z" level=error msg="open /run/runc: no such file or directory"
	I1019 13:19:32.812389  503186 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 13:19:32.834629  503186 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 13:19:32.834659  503186 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 13:19:32.834752  503186 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 13:19:32.860259  503186 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 13:19:32.860861  503186 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-895642" does not appear in /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:19:32.861329  503186 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-292654/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-895642" cluster setting kubeconfig missing "newest-cni-895642" context setting]
	I1019 13:19:32.862441  503186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:32.866956  503186 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 13:19:32.894204  503186 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 13:19:32.894240  503186 kubeadm.go:601] duration metric: took 59.575017ms to restartPrimaryControlPlane
	I1019 13:19:32.894250  503186 kubeadm.go:402] duration metric: took 179.312154ms to StartCluster
	I1019 13:19:32.894265  503186 settings.go:142] acquiring lock: {Name:mk1099ab6cbf86eca031b5f8e2b43952c9c0f84f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:32.894332  503186 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:19:32.895329  503186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/kubeconfig: {Name:mk73f840b7aff0d0c482ab3ce736e39ca7b2eabe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:32.895543  503186 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:19:32.895955  503186 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 13:19:32.896047  503186 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-895642"
	I1019 13:19:32.896069  503186 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-895642"
	W1019 13:19:32.896075  503186 addons.go:247] addon storage-provisioner should already be in state true
	I1019 13:19:32.896106  503186 host.go:66] Checking if "newest-cni-895642" exists ...
	I1019 13:19:32.896714  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:32.896881  503186 config.go:182] Loaded profile config "newest-cni-895642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:19:32.896933  503186 addons.go:69] Setting dashboard=true in profile "newest-cni-895642"
	I1019 13:19:32.896944  503186 addons.go:238] Setting addon dashboard=true in "newest-cni-895642"
	W1019 13:19:32.896950  503186 addons.go:247] addon dashboard should already be in state true
	I1019 13:19:32.896967  503186 host.go:66] Checking if "newest-cni-895642" exists ...
	I1019 13:19:32.897398  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:32.898253  503186 addons.go:69] Setting default-storageclass=true in profile "newest-cni-895642"
	I1019 13:19:32.898321  503186 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-895642"
	I1019 13:19:32.898658  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:32.899976  503186 out.go:179] * Verifying Kubernetes components...
	I1019 13:19:32.903268  503186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 13:19:32.955871  503186 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 13:19:32.958895  503186 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:19:32.958923  503186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 13:19:32.958989  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:32.959467  503186 addons.go:238] Setting addon default-storageclass=true in "newest-cni-895642"
	W1019 13:19:32.959482  503186 addons.go:247] addon default-storageclass should already be in state true
	I1019 13:19:32.959506  503186 host.go:66] Checking if "newest-cni-895642" exists ...
	I1019 13:19:32.959935  503186 cli_runner.go:164] Run: docker container inspect newest-cni-895642 --format={{.State.Status}}
	I1019 13:19:32.977658  503186 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 13:19:32.982573  503186 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 13:19:32.985533  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 13:19:32.985564  503186 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 13:19:32.985636  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:33.020700  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:33.045991  503186 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 13:19:33.046014  503186 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 13:19:33.046082  503186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895642
	I1019 13:19:33.055558  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:33.097899  503186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/newest-cni-895642/id_rsa Username:docker}
	I1019 13:19:33.300589  503186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 13:19:33.323301  503186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 13:19:33.338409  503186 api_server.go:52] waiting for apiserver process to appear ...
	I1019 13:19:33.338537  503186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 13:19:33.373599  503186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 13:19:33.399805  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 13:19:33.399871  503186 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 13:19:33.446692  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 13:19:33.446720  503186 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 13:19:33.535860  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 13:19:33.535897  503186 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 13:19:33.598982  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 13:19:33.599016  503186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 13:19:33.632171  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 13:19:33.632199  503186 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 13:19:33.651270  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 13:19:33.651296  503186 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 13:19:33.664849  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 13:19:33.664876  503186 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 13:19:33.678940  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 13:19:33.678961  503186 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 13:19:33.692305  503186 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 13:19:33.692329  503186 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 13:19:33.711585  503186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
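	The single kubectl invocation above applies all ten dashboard manifests in one call. A sketch of assembling such a command (the helper is illustrative, not minikube's code; the binary and kubeconfig paths mirror the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyManifests runs `kubectl apply -f <file> ...` for every manifest in one call.
	func applyManifests(kubectl, kubeconfig string, files []string) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		files := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml", // two of the ten files from the log
		}
		if err := applyManifests("/var/lib/minikube/binaries/v1.34.1/kubectl",
			"/var/lib/minikube/kubeconfig", files); err != nil {
			fmt.Println(err)
		}
	}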
	I1019 13:19:41.907320  503186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.583937077s)
	I1019 13:19:41.907383  503186 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.568816755s)
	I1019 13:19:41.907396  503186 api_server.go:72] duration metric: took 9.011819253s to wait for apiserver process to appear ...
	I1019 13:19:41.907401  503186 api_server.go:88] waiting for apiserver healthz status ...
	I1019 13:19:41.907419  503186 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 13:19:41.907728  503186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.534060743s)
	I1019 13:19:41.908045  503186 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.196430771s)
	I1019 13:19:41.911576  503186 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-895642 addons enable metrics-server
	
	I1019 13:19:41.940098  503186 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 13:19:41.941451  503186 api_server.go:141] control plane version: v1.34.1
	I1019 13:19:41.941474  503186 api_server.go:131] duration metric: took 34.06651ms to wait for apiserver health ...
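	The healthz wait above boils down to polling https://<node>:8443/healthz until it returns 200. A test-style sketch that skips TLS verification for the cluster's self-signed certificate:

	// Sketch: poll the apiserver healthz endpoint, as the log does, until it
	// answers 200 or a deadline passes. InsecureSkipVerify is test-only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == 200 {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}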
	I1019 13:19:41.941484  503186 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 13:19:41.960414  503186 system_pods.go:59] 8 kube-system pods found
	I1019 13:19:41.960504  503186 system_pods.go:61] "coredns-66bc5c9577-gbtfz" [5f13f614-c060-4f18-90ea-149a9ddd78c3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 13:19:41.960529  503186 system_pods.go:61] "etcd-newest-cni-895642" [ddf46703-f963-42c8-b02e-db35d858825b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 13:19:41.960566  503186 system_pods.go:61] "kindnet-wtcgs" [348e9181-c940-4d5f-b47a-562fbdd88f99] Running
	I1019 13:19:41.960596  503186 system_pods.go:61] "kube-apiserver-newest-cni-895642" [320e873e-5b32-42b4-ab87-be63b052dd3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 13:19:41.960619  503186 system_pods.go:61] "kube-controller-manager-newest-cni-895642" [eb67514c-1127-4953-aaa0-e0b02b9a5c38] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 13:19:41.960656  503186 system_pods.go:61] "kube-proxy-f8v8j" [4ce496c6-376a-47a7-adb5-90a20dfe8e09] Running
	I1019 13:19:41.960685  503186 system_pods.go:61] "kube-scheduler-newest-cni-895642" [981bd088-aa41-45c6-8263-995758c40371] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 13:19:41.960707  503186 system_pods.go:61] "storage-provisioner" [67bebe62-06cb-4eca-916e-d2799b856c75] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 13:19:41.960747  503186 system_pods.go:74] duration metric: took 19.251832ms to wait for pod list to return data ...
	I1019 13:19:41.960777  503186 default_sa.go:34] waiting for default service account to be created ...
	I1019 13:19:41.968088  503186 default_sa.go:45] found service account: "default"
	I1019 13:19:41.968161  503186 default_sa.go:55] duration metric: took 7.361668ms for default service account to be created ...
	I1019 13:19:41.968189  503186 kubeadm.go:586] duration metric: took 9.072610296s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 13:19:41.968237  503186 node_conditions.go:102] verifying NodePressure condition ...
	I1019 13:19:41.973737  503186 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1019 13:19:41.976250  503186 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1019 13:19:41.976280  503186 node_conditions.go:123] node cpu capacity is 2
	I1019 13:19:41.976293  503186 node_conditions.go:105] duration metric: took 8.03193ms to run NodePressure ...
	I1019 13:19:41.976306  503186 start.go:241] waiting for startup goroutines ...
	I1019 13:19:41.976900  503186 addons.go:514] duration metric: took 9.080944826s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 13:19:41.976973  503186 start.go:246] waiting for cluster config update ...
	I1019 13:19:41.977000  503186 start.go:255] writing updated cluster config ...
	I1019 13:19:41.977345  503186 ssh_runner.go:195] Run: rm -f paused
	I1019 13:19:42.085431  503186 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1019 13:19:42.090605  503186 out.go:179] * Done! kubectl is now configured to use "newest-cni-895642" cluster and "default" namespace by default
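	The closing note compares kubectl's minor version with the cluster's. A toy sketch of that skew computation (assumes well-formed x.y.z version strings):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component of a "x.y.z" or "vx.y.z" version string.
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		kubectl, cluster := "1.33.2", "1.34.1" // values from the log
		skew := minor(cluster) - minor(kubectl)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	}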
	
	
	==> CRI-O <==
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.423387719Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.433049456Z" level=info msg="Running pod sandbox: kube-system/kindnet-wtcgs/POD" id=a5912e30-a44c-4496-b432-7cb41ce41d9d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.433115549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.455707128Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a5912e30-a44c-4496-b432-7cb41ce41d9d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.464577633Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9096262c-0090-4e2d-90b9-07eecb3c3de0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.492148504Z" level=info msg="Ran pod sandbox d99569949a19cf138684231c223d00dec4e4cfae0b7c19910cba5516180714f2 with infra container: kube-system/kindnet-wtcgs/POD" id=a5912e30-a44c-4496-b432-7cb41ce41d9d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.493304778Z" level=info msg="Ran pod sandbox 55bfc587a836b32faaa32b9e78224e32b45b5fe5d1bbe51bc6deeadcd6703548 with infra container: kube-system/kube-proxy-f8v8j/POD" id=9096262c-0090-4e2d-90b9-07eecb3c3de0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.516094834Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b78d9b8c-0ebb-472a-b2f7-f6bdd2711efb name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.518925847Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=50530d34-26d2-46ef-9bb6-22776e9053d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.520779928Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=053154df-4c1d-409c-a0cf-8c4dc2b6de00 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.528607071Z" level=info msg="Creating container: kube-system/kube-proxy-f8v8j/kube-proxy" id=95dce8e4-a5ed-4a01-87c5-6c1a83e8dc6b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.528911797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.530505722Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c424c103-a65d-4d39-bd5c-160f1885d423 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.552532026Z" level=info msg="Creating container: kube-system/kindnet-wtcgs/kindnet-cni" id=f9de50df-9b47-45b2-a8b1-9855a74c583e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.562727972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.59207413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.594257257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.622059016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.629644252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.768934461Z" level=info msg="Created container 03dbca539bcfc98cd7a3a2ec0eba96e6e563c371d668f62cf7af7e2a2476fb71: kube-system/kindnet-wtcgs/kindnet-cni" id=f9de50df-9b47-45b2-a8b1-9855a74c583e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.770852551Z" level=info msg="Starting container: 03dbca539bcfc98cd7a3a2ec0eba96e6e563c371d668f62cf7af7e2a2476fb71" id=1c6bc06e-9e87-4249-b5aa-12fd77b53f19 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.776022025Z" level=info msg="Started container" PID=1066 containerID=03dbca539bcfc98cd7a3a2ec0eba96e6e563c371d668f62cf7af7e2a2476fb71 description=kube-system/kindnet-wtcgs/kindnet-cni id=1c6bc06e-9e87-4249-b5aa-12fd77b53f19 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d99569949a19cf138684231c223d00dec4e4cfae0b7c19910cba5516180714f2
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.821495263Z" level=info msg="Created container 7d41e598d4b099b52ee82c1ad8784082e78b722e837c80f62909d2860ad4de4f: kube-system/kube-proxy-f8v8j/kube-proxy" id=95dce8e4-a5ed-4a01-87c5-6c1a83e8dc6b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.833894604Z" level=info msg="Starting container: 7d41e598d4b099b52ee82c1ad8784082e78b722e837c80f62909d2860ad4de4f" id=48630224-e5c8-4986-86ca-4e45424d61ff name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.846865932Z" level=info msg="Started container" PID=1063 containerID=7d41e598d4b099b52ee82c1ad8784082e78b722e837c80f62909d2860ad4de4f description=kube-system/kube-proxy-f8v8j/kube-proxy id=48630224-e5c8-4986-86ca-4e45424d61ff name=/runtime.v1.RuntimeService/StartContainer sandboxID=55bfc587a836b32faaa32b9e78224e32b45b5fe5d1bbe51bc6deeadcd6703548
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	03dbca539bcfc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   d99569949a19c       kindnet-wtcgs                               kube-system
	7d41e598d4b09       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   55bfc587a836b       kube-proxy-f8v8j                            kube-system
	61f12db9b3adb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   c19716d1d8511       kube-controller-manager-newest-cni-895642   kube-system
	9e6f3db84aeca       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   263c2c1c47425       etcd-newest-cni-895642                      kube-system
	df7751d1304bd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   f165f40aac3fe       kube-apiserver-newest-cni-895642            kube-system
	f23d9dc2b7b73       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   b322c911b466c       kube-scheduler-newest-cni-895642            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-895642
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-895642
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=newest-cni-895642
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_19_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:19:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-895642
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:19:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:19:39 +0000   Sun, 19 Oct 2025 13:19:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:19:39 +0000   Sun, 19 Oct 2025 13:19:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:19:39 +0000   Sun, 19 Oct 2025 13:19:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 19 Oct 2025 13:19:39 +0000   Sun, 19 Oct 2025 13:19:07 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-895642
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                df9d6668-401f-4ce8-aa0c-269b36d9790d
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-895642                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-wtcgs                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-895642             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-895642    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-f8v8j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-895642             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node newest-cni-895642 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node newest-cni-895642 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node newest-cni-895642 status is now: NodeHasSufficientPID
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-895642 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-895642 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-895642 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-895642 event: Registered Node newest-cni-895642 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-895642 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-895642 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-895642 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-895642 event: Registered Node newest-cni-895642 in Controller
	
	
	==> dmesg <==
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	[Oct19 13:15] overlayfs: idmapped layers are currently not supported
	[ +34.413925] overlayfs: idmapped layers are currently not supported
	[Oct19 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.716246] overlayfs: idmapped layers are currently not supported
	[Oct19 13:18] overlayfs: idmapped layers are currently not supported
	[Oct19 13:19] overlayfs: idmapped layers are currently not supported
	[ +25.562956] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9e6f3db84aecaf5fccfaa84fa11003ed9c1a3adc30985ec057866ca7a90cdc83] <==
	{"level":"warn","ts":"2025-10-19T13:19:37.571095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.599732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.640054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.674796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.731202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.775110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.803367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.823958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.889358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.911147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.941917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.981151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.006292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.057987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.083598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.109434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.143934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.161493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.191513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.217128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.252800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.276890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.310852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.326145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.408221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60240","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:19:46 up  3:02,  0 user,  load average: 6.07, 4.14, 3.17
	Linux newest-cni-895642 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [03dbca539bcfc98cd7a3a2ec0eba96e6e563c371d668f62cf7af7e2a2476fb71] <==
	I1019 13:19:40.914081       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:19:40.914536       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 13:19:40.914651       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:19:40.914662       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:19:40.914676       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:19:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:19:41.131895       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:19:41.131940       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:19:41.131950       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:19:41.133536       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [df7751d1304bdecb2f8c2da9564eb9648edb59cf776486a8eab0e66763b2a99a] <==
	I1019 13:19:39.756962       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 13:19:39.767948       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:19:39.771111       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 13:19:39.814027       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 13:19:39.772980       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:19:39.814075       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 13:19:39.814327       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 13:19:39.814469       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 13:19:39.878599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 13:19:39.882107       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 13:19:39.893107       1 cache.go:39] Caches are synced for autoregister controller
	I1019 13:19:39.895289       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 13:19:39.895608       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1019 13:19:39.970865       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 13:19:40.198567       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:19:40.207352       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:19:41.295294       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 13:19:41.498678       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 13:19:41.557858       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:19:41.600544       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:19:41.699188       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.129.124"}
	I1019 13:19:41.732142       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.127.136"}
	I1019 13:19:44.018254       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:19:44.311600       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 13:19:44.483965       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [61f12db9b3adb0cf23775bbe9376fe1695c4d2722a25fc54809b48613d48b61f] <==
	I1019 13:19:43.994265       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 13:19:43.994342       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 13:19:43.996251       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 13:19:43.997361       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 13:19:43.997765       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:19:43.999138       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 13:19:44.000669       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 13:19:44.000850       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 13:19:44.001055       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 13:19:44.001114       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-895642"
	I1019 13:19:44.001664       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 13:19:44.005342       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 13:19:44.005662       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 13:19:44.008084       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 13:19:44.012057       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 13:19:44.012333       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 13:19:44.013848       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:19:44.021275       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 13:19:44.022918       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 13:19:44.027521       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 13:19:44.028616       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 13:19:44.034295       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:19:44.035441       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 13:19:44.042834       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 13:19:44.045171       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [7d41e598d4b099b52ee82c1ad8784082e78b722e837c80f62909d2860ad4de4f] <==
	I1019 13:19:41.196981       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:19:41.448361       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:19:41.548664       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:19:41.548709       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 13:19:41.548773       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:19:41.812591       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:19:41.812760       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:19:41.859121       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:19:41.859579       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:19:41.879471       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:19:41.886188       1 config.go:200] "Starting service config controller"
	I1019 13:19:41.886208       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:19:41.886226       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:19:41.886231       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:19:41.886241       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:19:41.886245       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:19:41.905529       1 config.go:309] "Starting node config controller"
	I1019 13:19:41.965890       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:19:41.965997       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:19:41.988979       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:19:41.989104       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 13:19:42.087201       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f23d9dc2b7b73320b039706001020bf4aba009db6c81f31750b64ba7d4b7b791] <==
	I1019 13:19:35.238688       1 serving.go:386] Generated self-signed cert in-memory
	W1019 13:19:39.482143       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 13:19:39.482170       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 13:19:39.482180       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 13:19:39.482187       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 13:19:39.671518       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 13:19:39.671734       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:19:39.690085       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:19:39.698126       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:19:39.698022       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:19:39.698078       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 13:19:39.758607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 13:19:39.786493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 13:19:39.786832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 13:19:39.786923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 13:19:39.787020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1019 13:19:39.800632       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1019 13:19:39.853569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 13:19:39.870652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	
	
	==> kubelet <==
	Oct 19 13:19:39 newest-cni-895642 kubelet[732]: I1019 13:19:39.497926     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-895642"
	Oct 19 13:19:39 newest-cni-895642 kubelet[732]: I1019 13:19:39.981200     732 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-895642"
	Oct 19 13:19:39 newest-cni-895642 kubelet[732]: I1019 13:19:39.981310     732 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-895642"
	Oct 19 13:19:39 newest-cni-895642 kubelet[732]: I1019 13:19:39.981348     732 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 19 13:19:39 newest-cni-895642 kubelet[732]: I1019 13:19:39.982313     732 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: E1019 13:19:40.020757     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-895642\" already exists" pod="kube-system/etcd-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.020814     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.069027     732 apiserver.go:52] "Watching apiserver"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: E1019 13:19:40.122131     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-895642\" already exists" pod="kube-system/kube-apiserver-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.122172     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: E1019 13:19:40.163657     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-895642\" already exists" pod="kube-system/kube-controller-manager-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.163698     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.173579     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/348e9181-c940-4d5f-b47a-562fbdd88f99-cni-cfg\") pod \"kindnet-wtcgs\" (UID: \"348e9181-c940-4d5f-b47a-562fbdd88f99\") " pod="kube-system/kindnet-wtcgs"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.173647     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/348e9181-c940-4d5f-b47a-562fbdd88f99-lib-modules\") pod \"kindnet-wtcgs\" (UID: \"348e9181-c940-4d5f-b47a-562fbdd88f99\") " pod="kube-system/kindnet-wtcgs"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.173785     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/348e9181-c940-4d5f-b47a-562fbdd88f99-xtables-lock\") pod \"kindnet-wtcgs\" (UID: \"348e9181-c940-4d5f-b47a-562fbdd88f99\") " pod="kube-system/kindnet-wtcgs"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.200694     732 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: E1019 13:19:40.246016     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-895642\" already exists" pod="kube-system/kube-scheduler-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.246401     732 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.279197     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ce496c6-376a-47a7-adb5-90a20dfe8e09-lib-modules\") pod \"kube-proxy-f8v8j\" (UID: \"4ce496c6-376a-47a7-adb5-90a20dfe8e09\") " pod="kube-system/kube-proxy-f8v8j"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.279475     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ce496c6-376a-47a7-adb5-90a20dfe8e09-xtables-lock\") pod \"kube-proxy-f8v8j\" (UID: \"4ce496c6-376a-47a7-adb5-90a20dfe8e09\") " pod="kube-system/kube-proxy-f8v8j"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: W1019 13:19:40.474377     732 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/crio-55bfc587a836b32faaa32b9e78224e32b45b5fe5d1bbe51bc6deeadcd6703548 WatchSource:0}: Error finding container 55bfc587a836b32faaa32b9e78224e32b45b5fe5d1bbe51bc6deeadcd6703548: Status 404 returned error can't find the container with id 55bfc587a836b32faaa32b9e78224e32b45b5fe5d1bbe51bc6deeadcd6703548
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: W1019 13:19:40.474904     732 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/crio-d99569949a19cf138684231c223d00dec4e4cfae0b7c19910cba5516180714f2 WatchSource:0}: Error finding container d99569949a19cf138684231c223d00dec4e4cfae0b7c19910cba5516180714f2: Status 404 returned error can't find the container with id d99569949a19cf138684231c223d00dec4e4cfae0b7c19910cba5516180714f2
	Oct 19 13:19:43 newest-cni-895642 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 13:19:43 newest-cni-895642 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 13:19:43 newest-cni-895642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895642 -n newest-cni-895642
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895642 -n newest-cni-895642: exit status 2 (457.660771ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
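Note: the probe above uses minikube's Go-template status output. A minimal sketch of the same per-component check run by hand (profile name taken from this run; Host, Kubelet, APIServer, and Kubeconfig are the template fields minikube documents for status):

  out/minikube-linux-arm64 status -p newest-cni-895642 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'

On a paused or partially stopped profile, status intentionally exits nonzero even though the host is Running, which is why the harness annotates exit status 2 with "may be ok".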
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-895642 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gbtfz storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqxlb kubernetes-dashboard-855c9754f9-7nxm5
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-895642 describe pod coredns-66bc5c9577-gbtfz storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqxlb kubernetes-dashboard-855c9754f9-7nxm5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-895642 describe pod coredns-66bc5c9577-gbtfz storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqxlb kubernetes-dashboard-855c9754f9-7nxm5: exit status 1 (132.82019ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gbtfz" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-rqxlb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7nxm5" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-895642 describe pod coredns-66bc5c9577-gbtfz storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqxlb kubernetes-dashboard-855c9754f9-7nxm5: exit status 1
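Note: the NotFound errors above most likely reflect namespace scoping rather than genuinely missing pods. The non-running pods listed at helpers_test.go:280 were found with -A across all namespaces (coredns and storage-provisioner live in kube-system, the dashboard pods in kubernetes-dashboard), while the describe command was issued without a namespace flag and therefore only searched default. A sketch of the namespaced form of the same post-mortem query:

  kubectl --context newest-cni-895642 -n kube-system describe pod coredns-66bc5c9577-gbtfz storage-provisioner
  kubectl --context newest-cni-895642 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-rqxlb kubernetes-dashboard-855c9754f9-7nxm5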
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-895642
helpers_test.go:243: (dbg) docker inspect newest-cni-895642:

-- stdout --
	[
	    {
	        "Id": "caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91",
	        "Created": "2025-10-19T13:18:47.102094751Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 503310,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T13:19:25.443920872Z",
	            "FinishedAt": "2025-10-19T13:19:24.497493043Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/hostname",
	        "HostsPath": "/var/lib/docker/containers/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/hosts",
	        "LogPath": "/var/lib/docker/containers/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91-json.log",
	        "Name": "/newest-cni-895642",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-895642:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-895642",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91",
	                "LowerDir": "/var/lib/docker/overlay2/78a263d1d7086b8fb12930f09e9fe63d30f6fc9948d021e88738800232e60a99-init/diff:/var/lib/docker/overlay2/22253622c2894832d30b813afe567f7b9ecf7984773aa56376172cfea7d51bfc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/78a263d1d7086b8fb12930f09e9fe63d30f6fc9948d021e88738800232e60a99/merged",
	                "UpperDir": "/var/lib/docker/overlay2/78a263d1d7086b8fb12930f09e9fe63d30f6fc9948d021e88738800232e60a99/diff",
	                "WorkDir": "/var/lib/docker/overlay2/78a263d1d7086b8fb12930f09e9fe63d30f6fc9948d021e88738800232e60a99/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-895642",
	                "Source": "/var/lib/docker/volumes/newest-cni-895642/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-895642",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-895642",
	                "name.minikube.sigs.k8s.io": "newest-cni-895642",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5012b598f1e8d96e43ae860f77e82c0022d6e03d0f75e58f1fb8f72461ef29eb",
	            "SandboxKey": "/var/run/docker/netns/5012b598f1e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-895642": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:b0:94:94:6f:23",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "584dee223ade6b07d2b96f7183f8063e011ff006f776b87c19f6da2971cc4a7f",
	                    "EndpointID": "f5ba3363136a01c3e13cd7a75ad963a1d6a3516490498877c88639573baff14f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-895642",
	                        "caf0cfe00265"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
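Note: rather than scanning the full inspect JSON, a single forwarded port can be extracted with docker's standard --format Go templating. A minimal sketch for the API server mapping shown above (8443/tcp, forwarded to host port 33466 in this run):

  docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-895642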
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895642 -n newest-cni-895642
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895642 -n newest-cni-895642: exit status 2 (526.856459ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-895642 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-895642 logs -n 25: (1.274836396s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:16 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-834340 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │                     │
	│ stop    │ -p embed-certs-834340 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-834340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:17 UTC │
	│ start   │ -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:17 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-455348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-455348 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-455348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ start   │ -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:19 UTC │
	│ image   │ embed-certs-834340 image list --format=json                                                                                                                                                                                                   │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ pause   │ -p embed-certs-834340 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │                     │
	│ delete  │ -p embed-certs-834340                                                                                                                                                                                                                         │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ delete  │ -p embed-certs-834340                                                                                                                                                                                                                         │ embed-certs-834340           │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:18 UTC │
	│ start   │ -p newest-cni-895642 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:18 UTC │ 19 Oct 25 13:19 UTC │
	│ addons  │ enable metrics-server -p newest-cni-895642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	│ stop    │ -p newest-cni-895642 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ addons  │ enable dashboard -p newest-cni-895642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ start   │ -p newest-cni-895642 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ image   │ default-k8s-diff-port-455348 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ pause   │ -p default-k8s-diff-port-455348 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	│ image   │ newest-cni-895642 image list --format=json                                                                                                                                                                                                    │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ pause   │ -p newest-cni-895642 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-895642            │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-455348                                                                                                                                                                                                               │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ delete  │ -p default-k8s-diff-port-455348                                                                                                                                                                                                               │ default-k8s-diff-port-455348 │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │ 19 Oct 25 13:19 UTC │
	│ start   │ -p auto-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-696007                  │ jenkins │ v1.37.0 │ 19 Oct 25 13:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 13:19:47
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 13:19:47.260714  506805 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:19:47.261392  506805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:19:47.261403  506805 out.go:374] Setting ErrFile to fd 2...
	I1019 13:19:47.261409  506805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:19:47.261738  506805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:19:47.262192  506805 out.go:368] Setting JSON to false
	I1019 13:19:47.263103  506805 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10938,"bootTime":1760869050,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:19:47.263165  506805 start.go:141] virtualization:  
	I1019 13:19:47.267010  506805 out.go:179] * [auto-696007] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:19:47.270575  506805 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:19:47.270752  506805 notify.go:220] Checking for updates...
	I1019 13:19:47.276424  506805 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:19:47.279369  506805 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:19:47.282525  506805 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:19:47.285423  506805 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:19:47.288303  506805 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:19:47.291849  506805 config.go:182] Loaded profile config "newest-cni-895642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:19:47.291950  506805 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:19:47.347540  506805 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:19:47.347649  506805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:19:47.460844  506805 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:19:47.448615369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:19:47.460949  506805 docker.go:318] overlay module found
	I1019 13:19:47.464992  506805 out.go:179] * Using the docker driver based on user configuration
	I1019 13:19:47.468087  506805 start.go:305] selected driver: docker
	I1019 13:19:47.468113  506805 start.go:925] validating driver "docker" against <nil>
	I1019 13:19:47.468127  506805 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:19:47.468852  506805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:19:47.570049  506805 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:19:47.553870135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:19:47.570216  506805 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 13:19:47.570459  506805 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 13:19:47.573417  506805 out.go:179] * Using Docker driver with root privileges
	I1019 13:19:47.576309  506805 cni.go:84] Creating CNI manager for ""
	I1019 13:19:47.576381  506805 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 13:19:47.576396  506805 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 13:19:47.576474  506805 start.go:349] cluster config:
	{Name:auto-696007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-696007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 13:19:47.579609  506805 out.go:179] * Starting "auto-696007" primary control-plane node in "auto-696007" cluster
	I1019 13:19:47.582529  506805 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 13:19:47.585578  506805 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 13:19:47.588471  506805 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 13:19:47.588539  506805 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1019 13:19:47.588557  506805 cache.go:58] Caching tarball of preloaded images
	I1019 13:19:47.588561  506805 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 13:19:47.588639  506805 preload.go:233] Found /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1019 13:19:47.588649  506805 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 13:19:47.588769  506805 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/config.json ...
	I1019 13:19:47.588786  506805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/config.json: {Name:mk27883ada0185b0d8b77bcf0277bf979fb8d546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 13:19:47.609530  506805 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 13:19:47.609552  506805 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 13:19:47.609566  506805 cache.go:232] Successfully downloaded all kic artifacts
	I1019 13:19:47.609588  506805 start.go:360] acquireMachinesLock for auto-696007: {Name:mkb00c276ee825f78f1f0367d6f4ac0a378ebae9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 13:19:47.609728  506805 start.go:364] duration metric: took 116.835µs to acquireMachinesLock for "auto-696007"
	I1019 13:19:47.609760  506805 start.go:93] Provisioning new machine with config: &{Name:auto-696007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-696007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 13:19:47.609831  506805 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.423387719Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.433049456Z" level=info msg="Running pod sandbox: kube-system/kindnet-wtcgs/POD" id=a5912e30-a44c-4496-b432-7cb41ce41d9d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.433115549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.455707128Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a5912e30-a44c-4496-b432-7cb41ce41d9d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.464577633Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9096262c-0090-4e2d-90b9-07eecb3c3de0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.492148504Z" level=info msg="Ran pod sandbox d99569949a19cf138684231c223d00dec4e4cfae0b7c19910cba5516180714f2 with infra container: kube-system/kindnet-wtcgs/POD" id=a5912e30-a44c-4496-b432-7cb41ce41d9d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.493304778Z" level=info msg="Ran pod sandbox 55bfc587a836b32faaa32b9e78224e32b45b5fe5d1bbe51bc6deeadcd6703548 with infra container: kube-system/kube-proxy-f8v8j/POD" id=9096262c-0090-4e2d-90b9-07eecb3c3de0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.516094834Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b78d9b8c-0ebb-472a-b2f7-f6bdd2711efb name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.518925847Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=50530d34-26d2-46ef-9bb6-22776e9053d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.520779928Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=053154df-4c1d-409c-a0cf-8c4dc2b6de00 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.528607071Z" level=info msg="Creating container: kube-system/kube-proxy-f8v8j/kube-proxy" id=95dce8e4-a5ed-4a01-87c5-6c1a83e8dc6b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.528911797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.530505722Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c424c103-a65d-4d39-bd5c-160f1885d423 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.552532026Z" level=info msg="Creating container: kube-system/kindnet-wtcgs/kindnet-cni" id=f9de50df-9b47-45b2-a8b1-9855a74c583e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.562727972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.59207413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.594257257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.622059016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.629644252Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.768934461Z" level=info msg="Created container 03dbca539bcfc98cd7a3a2ec0eba96e6e563c371d668f62cf7af7e2a2476fb71: kube-system/kindnet-wtcgs/kindnet-cni" id=f9de50df-9b47-45b2-a8b1-9855a74c583e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.770852551Z" level=info msg="Starting container: 03dbca539bcfc98cd7a3a2ec0eba96e6e563c371d668f62cf7af7e2a2476fb71" id=1c6bc06e-9e87-4249-b5aa-12fd77b53f19 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.776022025Z" level=info msg="Started container" PID=1066 containerID=03dbca539bcfc98cd7a3a2ec0eba96e6e563c371d668f62cf7af7e2a2476fb71 description=kube-system/kindnet-wtcgs/kindnet-cni id=1c6bc06e-9e87-4249-b5aa-12fd77b53f19 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d99569949a19cf138684231c223d00dec4e4cfae0b7c19910cba5516180714f2
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.821495263Z" level=info msg="Created container 7d41e598d4b099b52ee82c1ad8784082e78b722e837c80f62909d2860ad4de4f: kube-system/kube-proxy-f8v8j/kube-proxy" id=95dce8e4-a5ed-4a01-87c5-6c1a83e8dc6b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.833894604Z" level=info msg="Starting container: 7d41e598d4b099b52ee82c1ad8784082e78b722e837c80f62909d2860ad4de4f" id=48630224-e5c8-4986-86ca-4e45424d61ff name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 13:19:40 newest-cni-895642 crio[611]: time="2025-10-19T13:19:40.846865932Z" level=info msg="Started container" PID=1063 containerID=7d41e598d4b099b52ee82c1ad8784082e78b722e837c80f62909d2860ad4de4f description=kube-system/kube-proxy-f8v8j/kube-proxy id=48630224-e5c8-4986-86ca-4e45424d61ff name=/runtime.v1.RuntimeService/StartContainer sandboxID=55bfc587a836b32faaa32b9e78224e32b45b5fe5d1bbe51bc6deeadcd6703548
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	03dbca539bcfc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   8 seconds ago       Running             kindnet-cni               1                   d99569949a19c       kindnet-wtcgs                               kube-system
	7d41e598d4b09       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   55bfc587a836b       kube-proxy-f8v8j                            kube-system
	61f12db9b3adb       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   1                   c19716d1d8511       kube-controller-manager-newest-cni-895642   kube-system
	9e6f3db84aeca       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      1                   263c2c1c47425       etcd-newest-cni-895642                      kube-system
	df7751d1304bd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            1                   f165f40aac3fe       kube-apiserver-newest-cni-895642            kube-system
	f23d9dc2b7b73       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            1                   b322c911b466c       kube-scheduler-newest-cni-895642            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-895642
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-895642
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=newest-cni-895642
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T13_19_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 13:19:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-895642
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 13:19:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 13:19:39 +0000   Sun, 19 Oct 2025 13:19:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 13:19:39 +0000   Sun, 19 Oct 2025 13:19:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 13:19:39 +0000   Sun, 19 Oct 2025 13:19:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 19 Oct 2025 13:19:39 +0000   Sun, 19 Oct 2025 13:19:07 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-895642
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                df9d6668-401f-4ce8-aa0c-269b36d9790d
	  Boot ID:                    02276678-c9d0-4308-9474-c920f9bcefa8
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-895642                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kindnet-wtcgs                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-895642             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-895642    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-f8v8j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-895642             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node newest-cni-895642 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node newest-cni-895642 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node newest-cni-895642 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-895642 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node newest-cni-895642 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node newest-cni-895642 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           31s                node-controller  Node newest-cni-895642 event: Registered Node newest-cni-895642 in Controller
	  Normal   Starting                 17s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17s (x8 over 17s)  kubelet          Node newest-cni-895642 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s (x8 over 17s)  kubelet          Node newest-cni-895642 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s (x8 over 17s)  kubelet          Node newest-cni-895642 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-895642 event: Registered Node newest-cni-895642 in Controller
	
	
	==> dmesg <==
	[ +11.914063] overlayfs: idmapped layers are currently not supported
	[Oct19 12:57] overlayfs: idmapped layers are currently not supported
	[Oct19 12:58] overlayfs: idmapped layers are currently not supported
	[ +48.481184] overlayfs: idmapped layers are currently not supported
	[Oct19 12:59] overlayfs: idmapped layers are currently not supported
	[Oct19 13:00] overlayfs: idmapped layers are currently not supported
	[Oct19 13:01] overlayfs: idmapped layers are currently not supported
	[Oct19 13:04] overlayfs: idmapped layers are currently not supported
	[Oct19 13:05] overlayfs: idmapped layers are currently not supported
	[Oct19 13:06] overlayfs: idmapped layers are currently not supported
	[Oct19 13:08] overlayfs: idmapped layers are currently not supported
	[ +38.759554] overlayfs: idmapped layers are currently not supported
	[Oct19 13:10] overlayfs: idmapped layers are currently not supported
	[Oct19 13:11] overlayfs: idmapped layers are currently not supported
	[Oct19 13:12] overlayfs: idmapped layers are currently not supported
	[ +39.991818] overlayfs: idmapped layers are currently not supported
	[Oct19 13:13] overlayfs: idmapped layers are currently not supported
	[Oct19 13:14] overlayfs: idmapped layers are currently not supported
	[Oct19 13:15] overlayfs: idmapped layers are currently not supported
	[ +34.413925] overlayfs: idmapped layers are currently not supported
	[Oct19 13:17] overlayfs: idmapped layers are currently not supported
	[ +27.716246] overlayfs: idmapped layers are currently not supported
	[Oct19 13:18] overlayfs: idmapped layers are currently not supported
	[Oct19 13:19] overlayfs: idmapped layers are currently not supported
	[ +25.562956] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9e6f3db84aecaf5fccfaa84fa11003ed9c1a3adc30985ec057866ca7a90cdc83] <==
	{"level":"warn","ts":"2025-10-19T13:19:37.571095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.599732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.640054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.674796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.731202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.775110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.803367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.823958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.889358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.911147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.941917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:37.981151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.006292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.057987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.083598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.109434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.143934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.161493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.191513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.217128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.252800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.276890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.310852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.326145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T13:19:38.408221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60240","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:19:49 up  3:02,  0 user,  load average: 5.74, 4.10, 3.16
	Linux newest-cni-895642 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [03dbca539bcfc98cd7a3a2ec0eba96e6e563c371d668f62cf7af7e2a2476fb71] <==
	I1019 13:19:40.914081       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 13:19:40.914536       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 13:19:40.914651       1 main.go:148] setting mtu 1500 for CNI 
	I1019 13:19:40.914662       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 13:19:40.914676       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T13:19:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 13:19:41.131895       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 13:19:41.131940       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 13:19:41.131950       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 13:19:41.133536       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [df7751d1304bdecb2f8c2da9564eb9648edb59cf776486a8eab0e66763b2a99a] <==
	I1019 13:19:39.756962       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 13:19:39.767948       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 13:19:39.771111       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 13:19:39.814027       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 13:19:39.772980       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 13:19:39.814075       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 13:19:39.814327       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 13:19:39.814469       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 13:19:39.878599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 13:19:39.882107       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 13:19:39.893107       1 cache.go:39] Caches are synced for autoregister controller
	I1019 13:19:39.895289       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 13:19:39.895608       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1019 13:19:39.970865       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 13:19:40.198567       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 13:19:40.207352       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 13:19:41.295294       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 13:19:41.498678       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 13:19:41.557858       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 13:19:41.600544       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 13:19:41.699188       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.129.124"}
	I1019 13:19:41.732142       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.127.136"}
	I1019 13:19:44.018254       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 13:19:44.311600       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 13:19:44.483965       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [61f12db9b3adb0cf23775bbe9376fe1695c4d2722a25fc54809b48613d48b61f] <==
	I1019 13:19:43.994265       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 13:19:43.994342       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 13:19:43.996251       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 13:19:43.997361       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 13:19:43.997765       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 13:19:43.999138       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 13:19:44.000669       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 13:19:44.000850       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 13:19:44.001055       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 13:19:44.001114       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-895642"
	I1019 13:19:44.001664       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 13:19:44.005342       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 13:19:44.005662       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 13:19:44.008084       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 13:19:44.012057       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 13:19:44.012333       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 13:19:44.013848       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 13:19:44.021275       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 13:19:44.022918       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 13:19:44.027521       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 13:19:44.028616       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 13:19:44.034295       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 13:19:44.035441       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 13:19:44.042834       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 13:19:44.045171       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [7d41e598d4b099b52ee82c1ad8784082e78b722e837c80f62909d2860ad4de4f] <==
	I1019 13:19:41.196981       1 server_linux.go:53] "Using iptables proxy"
	I1019 13:19:41.448361       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 13:19:41.548664       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 13:19:41.548709       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 13:19:41.548773       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 13:19:41.812591       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 13:19:41.812760       1 server_linux.go:132] "Using iptables Proxier"
	I1019 13:19:41.859121       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 13:19:41.859579       1 server.go:527] "Version info" version="v1.34.1"
	I1019 13:19:41.879471       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:19:41.886188       1 config.go:200] "Starting service config controller"
	I1019 13:19:41.886208       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 13:19:41.886226       1 config.go:106] "Starting endpoint slice config controller"
	I1019 13:19:41.886231       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 13:19:41.886241       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 13:19:41.886245       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 13:19:41.905529       1 config.go:309] "Starting node config controller"
	I1019 13:19:41.965890       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 13:19:41.965997       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 13:19:41.988979       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 13:19:41.989104       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 13:19:42.087201       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f23d9dc2b7b73320b039706001020bf4aba009db6c81f31750b64ba7d4b7b791] <==
	I1019 13:19:35.238688       1 serving.go:386] Generated self-signed cert in-memory
	W1019 13:19:39.482143       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 13:19:39.482170       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 13:19:39.482180       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 13:19:39.482187       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 13:19:39.671518       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 13:19:39.671734       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 13:19:39.690085       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:19:39.698126       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 13:19:39.698022       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 13:19:39.698078       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 13:19:39.758607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 13:19:39.786493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 13:19:39.786832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 13:19:39.786923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 13:19:39.787020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1019 13:19:39.800632       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1019 13:19:39.853569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 13:19:39.870652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	
	
	==> kubelet <==
	Oct 19 13:19:39 newest-cni-895642 kubelet[732]: I1019 13:19:39.497926     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-895642"
	Oct 19 13:19:39 newest-cni-895642 kubelet[732]: I1019 13:19:39.981200     732 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-895642"
	Oct 19 13:19:39 newest-cni-895642 kubelet[732]: I1019 13:19:39.981310     732 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-895642"
	Oct 19 13:19:39 newest-cni-895642 kubelet[732]: I1019 13:19:39.981348     732 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 19 13:19:39 newest-cni-895642 kubelet[732]: I1019 13:19:39.982313     732 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: E1019 13:19:40.020757     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-895642\" already exists" pod="kube-system/etcd-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.020814     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.069027     732 apiserver.go:52] "Watching apiserver"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: E1019 13:19:40.122131     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-895642\" already exists" pod="kube-system/kube-apiserver-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.122172     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: E1019 13:19:40.163657     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-895642\" already exists" pod="kube-system/kube-controller-manager-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.163698     732 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.173579     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/348e9181-c940-4d5f-b47a-562fbdd88f99-cni-cfg\") pod \"kindnet-wtcgs\" (UID: \"348e9181-c940-4d5f-b47a-562fbdd88f99\") " pod="kube-system/kindnet-wtcgs"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.173647     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/348e9181-c940-4d5f-b47a-562fbdd88f99-lib-modules\") pod \"kindnet-wtcgs\" (UID: \"348e9181-c940-4d5f-b47a-562fbdd88f99\") " pod="kube-system/kindnet-wtcgs"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.173785     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/348e9181-c940-4d5f-b47a-562fbdd88f99-xtables-lock\") pod \"kindnet-wtcgs\" (UID: \"348e9181-c940-4d5f-b47a-562fbdd88f99\") " pod="kube-system/kindnet-wtcgs"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.200694     732 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: E1019 13:19:40.246016     732 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-895642\" already exists" pod="kube-system/kube-scheduler-newest-cni-895642"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.246401     732 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.279197     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ce496c6-376a-47a7-adb5-90a20dfe8e09-lib-modules\") pod \"kube-proxy-f8v8j\" (UID: \"4ce496c6-376a-47a7-adb5-90a20dfe8e09\") " pod="kube-system/kube-proxy-f8v8j"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: I1019 13:19:40.279475     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ce496c6-376a-47a7-adb5-90a20dfe8e09-xtables-lock\") pod \"kube-proxy-f8v8j\" (UID: \"4ce496c6-376a-47a7-adb5-90a20dfe8e09\") " pod="kube-system/kube-proxy-f8v8j"
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: W1019 13:19:40.474377     732 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/crio-55bfc587a836b32faaa32b9e78224e32b45b5fe5d1bbe51bc6deeadcd6703548 WatchSource:0}: Error finding container 55bfc587a836b32faaa32b9e78224e32b45b5fe5d1bbe51bc6deeadcd6703548: Status 404 returned error can't find the container with id 55bfc587a836b32faaa32b9e78224e32b45b5fe5d1bbe51bc6deeadcd6703548
	Oct 19 13:19:40 newest-cni-895642 kubelet[732]: W1019 13:19:40.474904     732 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/caf0cfe002654debf4474233e9faa44789760736c491ec22e76a69f8919dba91/crio-d99569949a19cf138684231c223d00dec4e4cfae0b7c19910cba5516180714f2 WatchSource:0}: Error finding container d99569949a19cf138684231c223d00dec4e4cfae0b7c19910cba5516180714f2: Status 404 returned error can't find the container with id d99569949a19cf138684231c223d00dec4e4cfae0b7c19910cba5516180714f2
	Oct 19 13:19:43 newest-cni-895642 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 13:19:43 newest-cni-895642 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 13:19:43 newest-cni-895642 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895642 -n newest-cni-895642
E1019 13:19:50.322458  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:19:50.328743  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:19:50.341072  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-895642 -n newest-cni-895642: exit status 2 (421.679479ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-895642 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1019 13:19:50.364021  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:19:50.405482  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gbtfz storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqxlb kubernetes-dashboard-855c9754f9-7nxm5
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-895642 describe pod coredns-66bc5c9577-gbtfz storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqxlb kubernetes-dashboard-855c9754f9-7nxm5
E1019 13:19:50.486748  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-895642 describe pod coredns-66bc5c9577-gbtfz storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqxlb kubernetes-dashboard-855c9754f9-7nxm5: exit status 1 (97.902871ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gbtfz" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-rqxlb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-7nxm5" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-895642 describe pod coredns-66bc5c9577-gbtfz storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rqxlb kubernetes-dashboard-855c9754f9-7nxm5: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.57s)
E1019 13:26:17.066453  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:17.072791  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:17.084131  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:17.105534  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:17.146917  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:17.228450  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:17.390218  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:17.711601  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:18.353459  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:19.635361  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:20.238008  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:20.244284  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:20.255590  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:20.276891  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:20.318129  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:20.399701  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:20.561282  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:20.883379  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:21.525195  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:22.197241  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:22.806770  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:25.368772  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:27.318553  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:30.490998  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:37.560676  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:40.733480  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:45.933141  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:26:58.042952  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/auto-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:27:01.214873  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kindnet-696007/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (260/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.56
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.7
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.1
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 182.76
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 9.84
48 TestAddons/StoppedEnableDisable 12.51
49 TestCertOptions 34.23
50 TestCertExpiration 235.26
52 TestForceSystemdFlag 39.58
53 TestForceSystemdEnv 38.86
59 TestErrorSpam/setup 32.58
60 TestErrorSpam/start 0.78
61 TestErrorSpam/status 1.16
62 TestErrorSpam/pause 6.76
63 TestErrorSpam/unpause 5.27
64 TestErrorSpam/stop 1.53
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 82.04
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 27.94
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.63
76 TestFunctional/serial/CacheCmd/cache/add_local 1.14
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 36.08
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.47
87 TestFunctional/serial/LogsFileCmd 1.51
88 TestFunctional/serial/InvalidService 4.48
90 TestFunctional/parallel/ConfigCmd 0.46
91 TestFunctional/parallel/DashboardCmd 11.35
92 TestFunctional/parallel/DryRun 0.55
93 TestFunctional/parallel/InternationalLanguage 0.22
94 TestFunctional/parallel/StatusCmd 1.09
99 TestFunctional/parallel/AddonsCmd 0.22
100 TestFunctional/parallel/PersistentVolumeClaim 27.58
102 TestFunctional/parallel/SSHCmd 0.71
103 TestFunctional/parallel/CpCmd 2.07
105 TestFunctional/parallel/FileSync 0.38
106 TestFunctional/parallel/CertSync 2.25
110 TestFunctional/parallel/NodeLabels 0.1
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
114 TestFunctional/parallel/License 0.38
115 TestFunctional/parallel/Version/short 0.09
116 TestFunctional/parallel/Version/components 1.29
117 TestFunctional/parallel/ImageCommands/ImageListShort 1.66
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.53
122 TestFunctional/parallel/ImageCommands/Setup 0.68
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.72
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.42
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
146 TestFunctional/parallel/ProfileCmd/profile_list 0.43
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
148 TestFunctional/parallel/MountCmd/any-port 8.08
149 TestFunctional/parallel/MountCmd/specific-port 2.02
150 TestFunctional/parallel/MountCmd/VerifyCleanup 2.19
151 TestFunctional/parallel/ServiceCmd/List 0.58
152 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 187.58
164 TestMultiControlPlane/serial/DeployApp 6.59
165 TestMultiControlPlane/serial/PingHostFromPods 1.55
166 TestMultiControlPlane/serial/AddWorkerNode 59.79
167 TestMultiControlPlane/serial/NodeLabels 0.13
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.02
169 TestMultiControlPlane/serial/CopyFile 20.26
170 TestMultiControlPlane/serial/StopSecondaryNode 12.85
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
172 TestMultiControlPlane/serial/RestartSecondaryNode 30.32
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.47
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 134.92
175 TestMultiControlPlane/serial/DeleteSecondaryNode 12.04
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
177 TestMultiControlPlane/serial/StopCluster 36.05
178 TestMultiControlPlane/serial/RestartCluster 64.79
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.04
180 TestMultiControlPlane/serial/AddSecondaryNode 80.92
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
185 TestJSONOutput/start/Command 82.97
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.83
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 41.38
211 TestKicCustomNetwork/use_default_bridge_network 37.38
212 TestKicExistingNetwork 38.93
213 TestKicCustomSubnet 36.97
214 TestKicStaticIP 36.68
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 74.43
219 TestMountStart/serial/StartWithMountFirst 10.17
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 6.33
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 8.79
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 133.86
231 TestMultiNode/serial/DeployApp2Nodes 4.96
232 TestMultiNode/serial/PingHostFrom2Pods 0.97
233 TestMultiNode/serial/AddNode 59.3
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.81
236 TestMultiNode/serial/CopyFile 10.32
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 8.5
239 TestMultiNode/serial/RestartKeepsNodes 75.68
240 TestMultiNode/serial/DeleteNode 5.67
241 TestMultiNode/serial/StopMultiNode 24.04
242 TestMultiNode/serial/RestartMultiNode 54.47
243 TestMultiNode/serial/ValidateNameConflict 38.67
248 TestPreload 127.17
250 TestScheduledStopUnix 115.16
253 TestInsufficientStorage 13.81
254 TestRunningBinaryUpgrade 54.84
256 TestKubernetesUpgrade 351.48
257 TestMissingContainerUpgrade 113.13
259 TestPause/serial/Start 95.28
260 TestPause/serial/SecondStartNoReconfiguration 28.98
262 TestStoppedBinaryUpgrade/Setup 0.7
263 TestStoppedBinaryUpgrade/Upgrade 64.31
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.24
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
274 TestNoKubernetes/serial/StartWithK8s 36.63
275 TestNoKubernetes/serial/StartWithStopK8s 12.66
276 TestNoKubernetes/serial/Start 5.61
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
278 TestNoKubernetes/serial/ProfileList 1.04
279 TestNoKubernetes/serial/Stop 1.32
280 TestNoKubernetes/serial/StartNoArgs 7.17
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
289 TestNetworkPlugins/group/false 3.79
294 TestStartStop/group/old-k8s-version/serial/FirstStart 69.06
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.56
297 TestStartStop/group/no-preload/serial/FirstStart 68.4
299 TestStartStop/group/old-k8s-version/serial/Stop 14.44
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
301 TestStartStop/group/old-k8s-version/serial/SecondStart 61.39
302 TestStartStop/group/no-preload/serial/DeployApp 10.42
304 TestStartStop/group/no-preload/serial/Stop 12.11
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
308 TestStartStop/group/no-preload/serial/SecondStart 61.13
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
312 TestStartStop/group/embed-certs/serial/FirstStart 85.87
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.89
319 TestStartStop/group/embed-certs/serial/DeployApp 9.39
321 TestStartStop/group/embed-certs/serial/Stop 12.21
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
323 TestStartStop/group/embed-certs/serial/SecondStart 55.13
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.33
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.01
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 61.2
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.15
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
334 TestStartStop/group/newest-cni/serial/FirstStart 40.58
335 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
338 TestStartStop/group/newest-cni/serial/Stop 1.35
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/newest-cni/serial/SecondStart 17.54
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.15
342 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
348 TestNetworkPlugins/group/auto/Start 89.28
349 TestNetworkPlugins/group/kindnet/Start 85.43
350 TestNetworkPlugins/group/auto/KubeletFlags 0.31
351 TestNetworkPlugins/group/auto/NetCatPod 10.31
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
354 TestNetworkPlugins/group/kindnet/NetCatPod 9.27
355 TestNetworkPlugins/group/auto/DNS 0.18
356 TestNetworkPlugins/group/auto/Localhost 0.14
357 TestNetworkPlugins/group/auto/HairPin 0.18
358 TestNetworkPlugins/group/kindnet/DNS 0.24
359 TestNetworkPlugins/group/kindnet/Localhost 0.23
360 TestNetworkPlugins/group/kindnet/HairPin 0.17
361 TestNetworkPlugins/group/calico/Start 76.55
362 TestNetworkPlugins/group/custom-flannel/Start 68.91
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
366 TestNetworkPlugins/group/calico/KubeletFlags 0.43
367 TestNetworkPlugins/group/calico/NetCatPod 11.34
368 TestNetworkPlugins/group/custom-flannel/DNS 0.17
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
371 TestNetworkPlugins/group/calico/DNS 0.17
372 TestNetworkPlugins/group/calico/Localhost 0.13
373 TestNetworkPlugins/group/calico/HairPin 0.14
374 TestNetworkPlugins/group/enable-default-cni/Start 79.84
375 TestNetworkPlugins/group/flannel/Start 64.91
376 TestNetworkPlugins/group/flannel/ControllerPod 6
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
378 TestNetworkPlugins/group/flannel/NetCatPod 11.26
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.3
381 TestNetworkPlugins/group/flannel/DNS 0.16
382 TestNetworkPlugins/group/flannel/Localhost 0.15
383 TestNetworkPlugins/group/flannel/HairPin 0.13
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
387 TestNetworkPlugins/group/bridge/Start 81.36
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
389 TestNetworkPlugins/group/bridge/NetCatPod 9.24
390 TestNetworkPlugins/group/bridge/DNS 0.16
391 TestNetworkPlugins/group/bridge/Localhost 0.13
392 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (9.56s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-865961 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-865961 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.561678819s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.56s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1019 12:13:34.957149  294518 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1019 12:13:34.957230  294518 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
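
For context: the preload-exists subtest passes when the cached tarball is already on disk. A minimal Go sketch of that check, using the cache path from the log lines above; this is illustrative only, not the test's own code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Cache path taken from the preload.go:198 log line above; the subtest
	// succeeds as long as this file already exists locally.
	home := "/home/jenkins/minikube-integration/21772-292654/.minikube"
	p := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", err)
		os.Exit(1)
	}
	fmt.Println("preload found:", p)
}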

TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-865961
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-865961: exit status 85 (101.916405ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-865961 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-865961 │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:13:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:13:25.439776  294523 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:13:25.439957  294523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:13:25.439987  294523 out.go:374] Setting ErrFile to fd 2...
	I1019 12:13:25.440008  294523 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:13:25.440273  294523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	W1019 12:13:25.440440  294523 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21772-292654/.minikube/config/config.json: open /home/jenkins/minikube-integration/21772-292654/.minikube/config/config.json: no such file or directory
	I1019 12:13:25.440860  294523 out.go:368] Setting JSON to true
	I1019 12:13:25.441738  294523 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6956,"bootTime":1760869050,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 12:13:25.441831  294523 start.go:141] virtualization:  
	I1019 12:13:25.445992  294523 out.go:99] [download-only-865961] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1019 12:13:25.446192  294523 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball: no such file or directory
	I1019 12:13:25.446264  294523 notify.go:220] Checking for updates...
	I1019 12:13:25.449254  294523 out.go:171] MINIKUBE_LOCATION=21772
	I1019 12:13:25.452408  294523 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:13:25.455299  294523 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 12:13:25.458143  294523 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 12:13:25.461090  294523 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1019 12:13:25.466863  294523 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 12:13:25.467143  294523 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:13:25.504199  294523 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 12:13:25.504325  294523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:13:25.560738  294523 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-19 12:13:25.550987016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:13:25.560856  294523 docker.go:318] overlay module found
	I1019 12:13:25.563714  294523 out.go:99] Using the docker driver based on user configuration
	I1019 12:13:25.563752  294523 start.go:305] selected driver: docker
	I1019 12:13:25.563768  294523 start.go:925] validating driver "docker" against <nil>
	I1019 12:13:25.563886  294523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:13:25.626425  294523 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-19 12:13:25.616715679 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:13:25.626614  294523 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:13:25.626912  294523 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1019 12:13:25.627066  294523 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 12:13:25.630212  294523 out.go:171] Using Docker driver with root privileges
	I1019 12:13:25.633211  294523 cni.go:84] Creating CNI manager for ""
	I1019 12:13:25.633274  294523 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:13:25.633292  294523 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:13:25.633367  294523 start.go:349] cluster config:
	{Name:download-only-865961 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-865961 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:13:25.636273  294523 out.go:99] Starting "download-only-865961" primary control-plane node in "download-only-865961" cluster
	I1019 12:13:25.636298  294523 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:13:25.639111  294523 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:13:25.639140  294523 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 12:13:25.639298  294523 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:13:25.655139  294523 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 12:13:25.655320  294523 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 12:13:25.655420  294523 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 12:13:25.695497  294523 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1019 12:13:25.695524  294523 cache.go:58] Caching tarball of preloaded images
	I1019 12:13:25.696312  294523 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 12:13:25.699432  294523 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1019 12:13:25.699464  294523 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1019 12:13:25.784220  294523 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1019 12:13:25.784382  294523 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1019 12:13:29.259597  294523 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1019 12:13:29.260023  294523 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/download-only-865961/config.json ...
	I1019 12:13:29.260062  294523 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/download-only-865961/config.json: {Name:mk4784661349cb93303a6c3c398192aafffbd13e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:13:29.260271  294523 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 12:13:29.261127  294523 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21772-292654/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-865961 host does not exist
	  To start a cluster, run: "minikube start -p download-only-865961"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
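
The "Last Start" log above shows the preload being fetched with an MD5 checksum obtained from the GCS API and appended as a ?checksum= query (download.go:108). A minimal Go sketch of that download-and-verify step follows; the URL and checksum are copied from the log, and the code is illustrative rather than minikube's actual implementation.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dest and rejects the file if its MD5 does
// not match want (hex-encoded), mirroring the checksum step in the log.
func downloadWithMD5(url, want, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the stream while writing it to disk, then compare digests.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
	if err := downloadWithMD5(url, "e092595ade89dbfc477bd4cd6b9c633b", "/tmp/preload.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}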

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-865961
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (4.7s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-900450 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-900450 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.698998778s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.70s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1019 12:13:40.125226  294518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1019 12:13:40.125264  294518 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-292654/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-900450
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-900450: exit status 85 (96.494453ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-865961 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-865961 │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:13 UTC │
	│ delete  │ -p download-only-865961                                                                                                                                                   │ download-only-865961 │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │ 19 Oct 25 12:13 UTC │
	│ start   │ -o=json --download-only -p download-only-900450 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-900450 │ jenkins │ v1.37.0 │ 19 Oct 25 12:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:13:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:13:35.471699  294723 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:13:35.471893  294723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:13:35.471924  294723 out.go:374] Setting ErrFile to fd 2...
	I1019 12:13:35.471948  294723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:13:35.472230  294723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:13:35.472671  294723 out.go:368] Setting JSON to true
	I1019 12:13:35.473518  294723 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6966,"bootTime":1760869050,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 12:13:35.473615  294723 start.go:141] virtualization:  
	I1019 12:13:35.477182  294723 out.go:99] [download-only-900450] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 12:13:35.477497  294723 notify.go:220] Checking for updates...
	I1019 12:13:35.481205  294723 out.go:171] MINIKUBE_LOCATION=21772
	I1019 12:13:35.484707  294723 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:13:35.487586  294723 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 12:13:35.490553  294723 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 12:13:35.493416  294723 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1019 12:13:35.499316  294723 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 12:13:35.499610  294723 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:13:35.521555  294723 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 12:13:35.521700  294723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:13:35.597166  294723 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-19 12:13:35.58770294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:13:35.597278  294723 docker.go:318] overlay module found
	I1019 12:13:35.600204  294723 out.go:99] Using the docker driver based on user configuration
	I1019 12:13:35.600243  294723 start.go:305] selected driver: docker
	I1019 12:13:35.600257  294723 start.go:925] validating driver "docker" against <nil>
	I1019 12:13:35.600377  294723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:13:35.658487  294723 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-19 12:13:35.649898891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:13:35.658652  294723 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:13:35.658944  294723 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1019 12:13:35.659110  294723 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 12:13:35.662114  294723 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-900450 host does not exist
	  To start a cluster, run: "minikube start -p download-only-900450"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.10s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-900450
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1019 12:13:41.269034  294518 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-974688 --alsologtostderr --binary-mirror http://127.0.0.1:41571 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-974688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-974688
--- PASS: TestBinaryMirror (0.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-694780
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-694780: exit status 85 (72.017203ms)

-- stdout --
	* Profile "addons-694780" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-694780"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-694780
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-694780: exit status 85 (77.038887ms)

-- stdout --
	* Profile "addons-694780" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-694780"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (182.76s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-694780 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-694780 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m2.757297346s)
--- PASS: TestAddons/Setup (182.76s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-694780 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-694780 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (9.84s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-694780 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-694780 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ad68bc25-4243-4208-ae68-a37db2558acc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ad68bc25-4243-4208-ae68-a37db2558acc] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004104393s
addons_test.go:694: (dbg) Run:  kubectl --context addons-694780 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-694780 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-694780 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-694780 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.84s)
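
The exec steps above verify that the gcp-auth webhook injected fake credentials into the busybox pod. Below is a small Go sketch of an equivalent standalone check, assuming (as the cat step above suggests) that GOOGLE_APPLICATION_CREDENTIALS points at /google-app-creds.json; the context and pod names are the ones used by the test.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same check the test performs with `kubectl exec`: read the env var
	// inside the pod and confirm it points at the injected credentials file.
	out, err := exec.Command("kubectl", "--context", "addons-694780",
		"exec", "busybox", "--",
		"printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "exec failed:", err)
		os.Exit(1)
	}
	path := strings.TrimSpace(string(out))
	// Assumed path, based on the `cat /google-app-creds.json` step above.
	if path != "/google-app-creds.json" {
		fmt.Fprintln(os.Stderr, "unexpected credentials path:", path)
		os.Exit(1)
	}
	fmt.Println("gcp-auth injected credentials at", path)
}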

TestAddons/StoppedEnableDisable (12.51s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-694780
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-694780: (12.211071162s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-694780
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-694780
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-694780
--- PASS: TestAddons/StoppedEnableDisable (12.51s)

TestCertOptions (34.23s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-264135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-264135 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.350574659s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-264135 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-264135 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-264135 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-264135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-264135
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-264135: (2.119578483s)
--- PASS: TestCertOptions (34.23s)
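
The openssl step above inspects the apiserver certificate for the SANs added by --apiserver-ips and --apiserver-names. For reference, the same inspection in Go: a sketch to run wherever /var/lib/minikube/certs/apiserver.crt is readable (e.g. inside the node), not part of the test itself.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Per the flags passed to minikube start above, expect localhost and
	// www.google.com among the DNS SANs, 127.0.0.1 and 192.168.15.15
	// among the IP SANs.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}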

TestCertExpiration (235.26s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-088393 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-088393 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (33.818422663s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-088393 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-088393 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.912690916s)
helpers_test.go:175: Cleaning up "cert-expiration-088393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-088393
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-088393: (2.529752518s)
--- PASS: TestCertExpiration (235.26s)

TestForceSystemdFlag (39.58s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-606072 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1019 13:11:29.005280  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:45.933003  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:11:46.966529  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-606072 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.749131363s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-606072 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-606072" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-606072
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-606072: (2.500120894s)
--- PASS: TestForceSystemdFlag (39.58s)
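
The cat step above checks that --force-systemd actually switched CRI-O's cgroup manager in the drop-in it writes. Illustratively, the relevant portion of such a drop-in would be expected to look like the following TOML fragment (an assumption for illustration, not the file's verbatim contents):

[crio.runtime]
cgroup_manager = "systemd"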

TestForceSystemdEnv (38.86s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-821686 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-821686 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.277213612s)
helpers_test.go:175: Cleaning up "force-systemd-env-821686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-821686
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-821686: (2.581056736s)
--- PASS: TestForceSystemdEnv (38.86s)

TestErrorSpam/setup (32.58s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-445666 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-445666 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-445666 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-445666 --driver=docker  --container-runtime=crio: (32.583752951s)
--- PASS: TestErrorSpam/setup (32.58s)

TestErrorSpam/start (0.78s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.16s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (6.76s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 pause: exit status 80 (2.511041592s)

-- stdout --
	* Pausing node nospam-445666 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:20:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 pause: exit status 80 (2.079346073s)

-- stdout --
	* Pausing node nospam-445666 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:20:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 pause: exit status 80 (2.169852618s)

-- stdout --
	* Pausing node nospam-445666 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:20:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.76s)

TestErrorSpam/unpause (5.27s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 unpause: exit status 80 (1.33082333s)

-- stdout --
	* Unpausing node nospam-445666 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:20:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 unpause: exit status 80 (1.820338524s)

-- stdout --
	* Unpausing node nospam-445666 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:20:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 unpause: exit status 80 (2.120496103s)

-- stdout --
	* Unpausing node nospam-445666 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:20:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.27s)

TestErrorSpam/stop (1.53s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 stop: (1.313629026s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-445666 --log_dir /tmp/nospam-445666 stop
--- PASS: TestErrorSpam/stop (1.53s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21772-292654/.minikube/files/etc/test/nested/copy/294518/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (82.04s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970848 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1019 12:21:45.935939  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:21:45.942336  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:21:45.953790  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:21:45.975192  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:21:46.016605  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:21:46.098146  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:21:46.260075  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:21:46.581383  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:21:47.223473  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:21:48.506898  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:21:51.068170  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:21:56.189762  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:22:06.432045  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-970848 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m22.035465391s)
--- PASS: TestFunctional/serial/StartWithProxy (82.04s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.94s)
=== RUN   TestFunctional/serial/SoftStart
I1019 12:22:21.144157  294518 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970848 --alsologtostderr -v=8
E1019 12:22:26.913675  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-970848 --alsologtostderr -v=8: (27.936417406s)
functional_test.go:678: soft start took 27.939372221s for "functional-970848" cluster.
I1019 12:22:49.080910  294518 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (27.94s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-970848 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.63s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-970848 cache add registry.k8s.io/pause:3.1: (1.210392354s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-970848 cache add registry.k8s.io/pause:3.3: (1.210191752s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-970848 cache add registry.k8s.io/pause:latest: (1.205636454s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.63s)

TestFunctional/serial/CacheCmd/cache/add_local (1.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-970848 /tmp/TestFunctionalserialCacheCmdcacheadd_local2034007517/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 cache add minikube-local-cache-test:functional-970848
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 cache delete minikube-local-cache-test:functional-970848
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-970848
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970848 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.225341ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 kubectl -- --context functional-970848 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-970848 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (36.08s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970848 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1019 12:23:07.875815  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-970848 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.077192578s)
functional_test.go:776: restart took 36.077291034s for "functional-970848" cluster.
I1019 12:23:32.734854  294518 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (36.08s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-970848 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.47s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-970848 logs: (1.466880436s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

TestFunctional/serial/LogsFileCmd (1.51s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 logs --file /tmp/TestFunctionalserialLogsFileCmd3171043562/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-970848 logs --file /tmp/TestFunctionalserialLogsFileCmd3171043562/001/logs.txt: (1.504943336s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

TestFunctional/serial/InvalidService (4.48s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-970848 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-970848
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-970848: exit status 115 (387.590514ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32276 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-970848 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.48s)

TestFunctional/parallel/ConfigCmd (0.46s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970848 config get cpus: exit status 14 (80.620964ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970848 config get cpus: exit status 14 (70.825273ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (11.35s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-970848 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-970848 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 322206: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.35s)

TestFunctional/parallel/DryRun (0.55s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970848 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-970848 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (210.788721ms)

-- stdout --
	* [functional-970848] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1019 12:34:16.177874  321691 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:34:16.178060  321691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:34:16.178091  321691 out.go:374] Setting ErrFile to fd 2...
	I1019 12:34:16.178113  321691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:34:16.178383  321691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:34:16.178768  321691 out.go:368] Setting JSON to false
	I1019 12:34:16.179715  321691 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8207,"bootTime":1760869050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 12:34:16.179818  321691 start.go:141] virtualization:  
	I1019 12:34:16.183383  321691 out.go:179] * [functional-970848] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 12:34:16.186567  321691 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:34:16.186641  321691 notify.go:220] Checking for updates...
	I1019 12:34:16.192590  321691 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:34:16.195610  321691 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 12:34:16.198508  321691 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 12:34:16.201396  321691 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 12:34:16.204268  321691 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:34:16.207580  321691 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:34:16.208162  321691 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:34:16.250353  321691 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 12:34:16.250470  321691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:34:16.318805  321691 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 12:34:16.307492169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:34:16.318908  321691 docker.go:318] overlay module found
	I1019 12:34:16.321966  321691 out.go:179] * Using the docker driver based on existing profile
	I1019 12:34:16.324944  321691 start.go:305] selected driver: docker
	I1019 12:34:16.324969  321691 start.go:925] validating driver "docker" against &{Name:functional-970848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-970848 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:34:16.325064  321691 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:34:16.329559  321691 out.go:203] 
	W1019 12:34:16.333010  321691 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1019 12:34:16.335958  321691 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970848 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.55s)

TestFunctional/parallel/InternationalLanguage (0.22s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970848 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-970848 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (220.624256ms)

-- stdout --
	* [functional-970848] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1019 12:34:15.968190  321645 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:34:15.968408  321645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:34:15.968441  321645 out.go:374] Setting ErrFile to fd 2...
	I1019 12:34:15.968461  321645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:34:15.969649  321645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:34:15.970102  321645 out.go:368] Setting JSON to false
	I1019 12:34:15.971046  321645 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8206,"bootTime":1760869050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 12:34:15.971147  321645 start.go:141] virtualization:  
	I1019 12:34:15.975099  321645 out.go:179] * [functional-970848] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1019 12:34:15.979151  321645 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:34:15.979273  321645 notify.go:220] Checking for updates...
	I1019 12:34:15.985035  321645 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:34:15.987922  321645 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 12:34:15.990740  321645 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 12:34:15.993726  321645 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 12:34:15.996623  321645 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:34:16.011779  321645 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:34:16.012441  321645 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:34:16.037574  321645 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 12:34:16.037802  321645 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:34:16.106478  321645 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 12:34:16.096550132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:34:16.106590  321645 docker.go:318] overlay module found
	I1019 12:34:16.111526  321645 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1019 12:34:16.114384  321645 start.go:305] selected driver: docker
	I1019 12:34:16.114406  321645 start.go:925] validating driver "docker" against &{Name:functional-970848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-970848 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:34:16.114526  321645 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:34:16.118167  321645 out.go:203] 
	W1019 12:34:16.121047  321645 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250 MiB is less than the usable minimum of 1800 MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250 MiB is less than the usable minimum of 1800 MB
	I1019 12:34:16.124615  321645 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
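TestFunctional/parallel/InternationalLanguage starts minikube under a French locale with a deliberately undersized --memory (250MB, below the 1800MB minimum) so that start fails fast and the test can assert the RSRC_INSUFFICIENT_REQ_MEMORY message is served from the translation catalog. A sketch of reproducing the localized failure, assuming a French locale is installed on the host:

    # fails during flag validation, before any cluster changes are made
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-970848 --memory=250MB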

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
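For reference, the three output forms exercised above, runnable against any live profile; the template keys (Host, Kubelet, APIServer, Kubeconfig) are fields of minikube's status struct:

    out/minikube-linux-arm64 -p functional-970848 status
    out/minikube-linux-arm64 -p functional-970848 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
    out/minikube-linux-arm64 -p functional-970848 status -o json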

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c2a46477-d3c8-4ad4-b914-907210b2389c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003414735s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-970848 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-970848 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-970848 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-970848 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0730b87b-1627-4f6a-a63a-d465eb238fff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0730b87b-1627-4f6a-a63a-d465eb238fff] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003198938s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-970848 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-970848 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-970848 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8272210e-1c98-423c-9212-ec67c3e19d30] Pending
helpers_test.go:352: "sp-pod" [8272210e-1c98-423c-9212-ec67c3e19d30] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8272210e-1c98-423c-9212-ec67c3e19d30] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003041127s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-970848 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.58s)
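The flow above reduces to: bind a claim, write through one pod, delete it, and read the file back from a replacement pod mounting the same claim. Condensed from the commands in the log (the manifests live in testdata/storage-provisioner; sp-pod mounts the claim at /tmp/mount):

    kubectl --context functional-970848 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-970848 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-970848 exec sp-pod -- touch /tmp/mount/foo    # write through pod 1
    kubectl --context functional-970848 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-970848 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-970848 exec sp-pod -- ls /tmp/mount           # foo must survive the pod swap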

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh -n functional-970848 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 cp functional-970848:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4109001472/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh -n functional-970848 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh -n functional-970848 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.07s)
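The three copy directions checked above, for reference: a bare path is on the host, while `<profile>:<path>` addresses a path inside the node (host-side target below is hypothetical):

    out/minikube-linux-arm64 -p functional-970848 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> node
    out/minikube-linux-arm64 -p functional-970848 cp functional-970848:/home/docker/cp-test.txt /tmp/out.txt  # node -> host
    out/minikube-linux-arm64 -p functional-970848 ssh "sudo cat /home/docker/cp-test.txt"                     # verify inside the node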

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/294518/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "sudo cat /etc/test/nested/copy/294518/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)
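The hosts file probed here is not created inside the VM by hand: anything placed under ~/.minikube/files/<path> on the host is synced to /<path> inside the node when the cluster starts. A sketch, assuming the default MINIKUBE_HOME:

    mkdir -p ~/.minikube/files/etc/test/nested/copy/294518
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/294518/hosts
    out/minikube-linux-arm64 start -p functional-970848        # sync happens at start
    out/minikube-linux-arm64 -p functional-970848 ssh "sudo cat /etc/test/nested/copy/294518/hosts"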

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/294518.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "sudo cat /etc/ssl/certs/294518.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/294518.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "sudo cat /usr/share/ca-certificates/294518.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2945182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "sudo cat /etc/ssl/certs/2945182.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2945182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "sudo cat /usr/share/ca-certificates/2945182.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)
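Likewise, PEM files dropped under ~/.minikube/certs are installed into the node's trust store at the paths probed above; the 51391683.0 / 3ec20f2e.0 names are the OpenSSL subject hashes of the synced certificates. A sketch with a hypothetical CA file:

    cp my-ca.pem ~/.minikube/certs/                            # my-ca.pem: any CA certificate
    out/minikube-linux-arm64 start -p functional-970848        # certs are (re)installed at start
    openssl x509 -hash -noout -in my-ca.pem                    # prints the hash used for /etc/ssl/certs/<hash>.0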

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-970848 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970848 ssh "sudo systemctl is-active docker": exit status 1 (362.922353ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970848 ssh "sudo systemctl is-active containerd": exit status 1 (337.109346ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)
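`systemctl is-active` exits non-zero for any state other than active (conventionally 3 for inactive), so on this crio cluster both probes print `inactive` and fail, which is exactly what the test wants. To check by hand:

    out/minikube-linux-arm64 -p functional-970848 ssh "sudo systemctl is-active docker"
    echo "exit: $?"    # non-zero; the remote systemctl exited 3 (inactive)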

                                                
                                    
x
+
TestFunctional/parallel/License (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-970848 version -o=json --components: (1.287743963s)
--- PASS: TestFunctional/parallel/Version/components (1.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-arm64 -p functional-970848 image ls --format short --alsologtostderr: (1.657376649s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-970848 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970848 image ls --format short --alsologtostderr:
I1019 12:34:24.176475  323033 out.go:360] Setting OutFile to fd 1 ...
I1019 12:34:24.176624  323033 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:34:24.176631  323033 out.go:374] Setting ErrFile to fd 2...
I1019 12:34:24.176635  323033 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:34:24.176899  323033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
I1019 12:34:24.177497  323033 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:34:24.177600  323033 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:34:24.178117  323033 cli_runner.go:164] Run: docker container inspect functional-970848 --format={{.State.Status}}
I1019 12:34:24.203525  323033 ssh_runner.go:195] Run: systemctl --version
I1019 12:34:24.203600  323033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
I1019 12:34:24.226666  323033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/functional-970848/id_rsa Username:docker}
I1019 12:34:24.336730  323033 ssh_runner.go:195] Run: sudo crictl images --output json
I1019 12:34:25.764220  323033 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.427450978s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.66s)
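As the stderr shows, each `image ls` variant is a thin wrapper over the CRI: minikube opens an ssh session to the node and parses `crictl images --output json`, so the same raw data can be pulled directly:

    out/minikube-linux-arm64 -p functional-970848 ssh "sudo crictl images --output json"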

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-970848 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970848 image ls --format table --alsologtostderr:
I1019 12:34:28.335012  323324 out.go:360] Setting OutFile to fd 1 ...
I1019 12:34:28.335138  323324 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:34:28.335150  323324 out.go:374] Setting ErrFile to fd 2...
I1019 12:34:28.335155  323324 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:34:28.336057  323324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
I1019 12:34:28.336716  323324 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:34:28.336837  323324 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:34:28.337356  323324 cli_runner.go:164] Run: docker container inspect functional-970848 --format={{.State.Status}}
I1019 12:34:28.358656  323324 ssh_runner.go:195] Run: systemctl --version
I1019 12:34:28.358708  323324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
I1019 12:34:28.377134  323324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/functional-970848/id_rsa Username:docker}
I1019 12:34:28.480813  323324 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-970848 image ls --format json --alsologtostderr:
[{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/ng
inx:latest"],"size":"184136558"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-miniku
be/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133
ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:
alpine"],"size":"54704654"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7
066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970848 image ls --format json --alsologtostderr:
I1019 12:34:28.108418  323287 out.go:360] Setting OutFile to fd 1 ...
I1019 12:34:28.109002  323287 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:34:28.109042  323287 out.go:374] Setting ErrFile to fd 2...
I1019 12:34:28.109062  323287 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:34:28.109913  323287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
I1019 12:34:28.111027  323287 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:34:28.111176  323287 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:34:28.111865  323287 cli_runner.go:164] Run: docker container inspect functional-970848 --format={{.State.Status}}
I1019 12:34:28.132212  323287 ssh_runner.go:195] Run: systemctl --version
I1019 12:34:28.132267  323287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
I1019 12:34:28.149479  323287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/functional-970848/id_rsa Username:docker}
I1019 12:34:28.252272  323287 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-970848 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970848 image ls --format yaml --alsologtostderr:
I1019 12:34:25.819463  323079 out.go:360] Setting OutFile to fd 1 ...
I1019 12:34:25.819568  323079 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:34:25.819579  323079 out.go:374] Setting ErrFile to fd 2...
I1019 12:34:25.819584  323079 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:34:25.819843  323079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
I1019 12:34:25.820451  323079 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:34:25.820566  323079 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:34:25.821009  323079 cli_runner.go:164] Run: docker container inspect functional-970848 --format={{.State.Status}}
I1019 12:34:25.858236  323079 ssh_runner.go:195] Run: systemctl --version
I1019 12:34:25.858322  323079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
I1019 12:34:25.886688  323079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/functional-970848/id_rsa Username:docker}
I1019 12:34:26.014421  323079 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970848 ssh pgrep buildkitd: exit status 1 (410.481926ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image build -t localhost/my-image:functional-970848 testdata/build --alsologtostderr
2025/10/19 12:34:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-970848 image build -t localhost/my-image:functional-970848 testdata/build --alsologtostderr: (3.885866037s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-970848 image build -t localhost/my-image:functional-970848 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5b6eef609cb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-970848
--> 189018defdb
Successfully tagged localhost/my-image:functional-970848
189018defdbc312c78a8679ea83d13e7a73ac533b9dd118e44ba5417476a6744
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970848 image build -t localhost/my-image:functional-970848 testdata/build --alsologtostderr:
I1019 12:34:26.548929  323201 out.go:360] Setting OutFile to fd 1 ...
I1019 12:34:26.549765  323201 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:34:26.549809  323201 out.go:374] Setting ErrFile to fd 2...
I1019 12:34:26.549834  323201 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:34:26.550146  323201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
I1019 12:34:26.550862  323201 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:34:26.551504  323201 config.go:182] Loaded profile config "functional-970848": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:34:26.551966  323201 cli_runner.go:164] Run: docker container inspect functional-970848 --format={{.State.Status}}
I1019 12:34:26.578056  323201 ssh_runner.go:195] Run: systemctl --version
I1019 12:34:26.578115  323201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970848
I1019 12:34:26.619353  323201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/functional-970848/id_rsa Username:docker}
I1019 12:34:26.729168  323201 build_images.go:161] Building image from path: /tmp/build.2264611609.tar
I1019 12:34:26.729240  323201 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1019 12:34:26.755413  323201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2264611609.tar
I1019 12:34:26.763291  323201 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2264611609.tar: stat -c "%s %y" /var/lib/minikube/build/build.2264611609.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2264611609.tar': No such file or directory
I1019 12:34:26.763331  323201 ssh_runner.go:362] scp /tmp/build.2264611609.tar --> /var/lib/minikube/build/build.2264611609.tar (3072 bytes)
I1019 12:34:26.798132  323201 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2264611609
I1019 12:34:26.811033  323201 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2264611609 -xf /var/lib/minikube/build/build.2264611609.tar
I1019 12:34:26.821873  323201 crio.go:315] Building image: /var/lib/minikube/build/build.2264611609
I1019 12:34:26.821953  323201 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-970848 /var/lib/minikube/build/build.2264611609 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1019 12:34:30.332628  323201 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-970848 /var/lib/minikube/build/build.2264611609 --cgroup-manager=cgroupfs: (3.510638648s)
I1019 12:34:30.332690  323201 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2264611609
I1019 12:34:30.340578  323201 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2264611609.tar
I1019 12:34:30.348944  323201 build_images.go:217] Built localhost/my-image:functional-970848 from /tmp/build.2264611609.tar
I1019 12:34:30.348978  323201 build_images.go:133] succeeded building to: functional-970848
I1019 12:34:30.348984  323201 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.53s)
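On a crio cluster there is no buildkitd (hence the expected `pgrep buildkitd` failure), so `image build` tars the build context, ships it into the node, and runs podman there, as the stderr traces. A rough by-hand equivalent under the same assumptions (staging paths hypothetical):

    tar -cf /tmp/ctx.tar -C testdata/build .
    out/minikube-linux-arm64 -p functional-970848 cp /tmp/ctx.tar /tmp/ctx.tar
    out/minikube-linux-arm64 -p functional-970848 ssh "sudo mkdir -p /tmp/ctx \
      && sudo tar -C /tmp/ctx -xf /tmp/ctx.tar \
      && sudo podman build -t localhost/my-image:functional-970848 /tmp/ctx --cgroup-manager=cgroupfs"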

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-970848
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-970848 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-970848 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-970848 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-970848 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 317438: os: process already finished
helpers_test.go:525: unable to kill pid 317255: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-970848 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-970848 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [24bda399-1128-431d-a262-535ccb314396] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [24bda399-1128-431d-a262-535ccb314396] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003660756s
I1019 12:23:54.312147  294518 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.42s)
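nginx-svc is a LoadBalancer service; on a local cluster it only receives an ingress IP while `minikube tunnel` (launched by StartTunnel above) is running. The follow-up subtests read that IP back and hit it, roughly:

    kubectl --context functional-970848 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -s "http://$(kubectl --context functional-970848 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"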

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image rm kicbase/echo-server:functional-970848 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-970848 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.159.183 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-970848 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "369.924724ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "59.863152ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "366.459748ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "53.847323ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970848 /tmp/TestFunctionalparallelMountCmdany-port3034983335/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760877242523742190" to /tmp/TestFunctionalparallelMountCmdany-port3034983335/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760877242523742190" to /tmp/TestFunctionalparallelMountCmdany-port3034983335/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760877242523742190" to /tmp/TestFunctionalparallelMountCmdany-port3034983335/001/test-1760877242523742190
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970848 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (353.539953ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1019 12:34:02.878218  294518 retry.go:31] will retry after 625.43108ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 19 12:34 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 19 12:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 19 12:34 test-1760877242523742190
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh cat /mount-9p/test-1760877242523742190
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-970848 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [3a0c0b3d-0f40-43fa-b8c4-6c75bf88eac8] Pending
helpers_test.go:352: "busybox-mount" [3a0c0b3d-0f40-43fa-b8c4-6c75bf88eac8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [3a0c0b3d-0f40-43fa-b8c4-6c75bf88eac8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [3a0c0b3d-0f40-43fa-b8c4-6c75bf88eac8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003913192s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-970848 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970848 /tmp/TestFunctionalparallelMountCmdany-port3034983335/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.08s)
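The test above polls until the 9p mount appears inside the guest, backing off between attempts (the retry.go lines). A minimal Go sketch of that readiness loop, assuming only a minikube binary on PATH; the profile name, mount point, and backoff schedule are illustrative, not taken from the test source:

// mount_probe.go: retry "findmnt -T <mountpoint> | grep 9p" over minikube ssh
// with exponential backoff until the 9p mount shows up.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func mountReady(profile, mountPoint string) bool {
	cmd := exec.Command("minikube", "-p", profile, "ssh",
		fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
	return cmd.Run() == nil // non-zero exit means the 9p mount is absent
}

func main() {
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		if mountReady("functional-970848", "/mount-9p") {
			fmt.Println("9p mount is up")
			return
		}
		fmt.Printf("attempt %d failed, retrying after %v\n", attempt, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("mount never became ready")
}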

TestFunctional/parallel/MountCmd/specific-port (2.02s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970848 /tmp/TestFunctionalparallelMountCmdspecific-port454930668/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970848 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (367.148893ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1019 12:34:10.974779  294518 retry.go:31] will retry after 611.337987ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970848 /tmp/TestFunctionalparallelMountCmdspecific-port454930668/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970848 ssh "sudo umount -f /mount-9p": exit status 1 (284.911017ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-970848 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970848 /tmp/TestFunctionalparallelMountCmdspecific-port454930668/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.19s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup151150049/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup151150049/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup151150049/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970848 ssh "findmnt -T" /mount1: exit status 1 (614.440883ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1019 12:34:13.251518  294518 retry.go:31] will retry after 661.2494ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-970848 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup151150049/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup151150049/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970848 /tmp/TestFunctionalparallelMountCmdVerifyCleanup151150049/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.19s)

TestFunctional/parallel/ServiceCmd/List (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-970848 service list -o json
functional_test.go:1504: Took "627.151689ms" to run "out/minikube-linux-arm64 -p functional-970848 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-970848
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-970848
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-970848
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (187.58s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1019 12:36:45.933055  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m6.65978275s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (187.58s)

TestMultiControlPlane/serial/DeployApp (6.59s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 kubectl -- rollout status deployment/busybox: (3.837253551s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-52d84 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-9xj8b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-fk997 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-52d84 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-9xj8b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-fk997 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-52d84 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-9xj8b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-fk997 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.59s)

TestMultiControlPlane/serial/PingHostFromPods (1.55s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-52d84 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-52d84 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-9xj8b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-9xj8b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-fk997 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 kubectl -- exec busybox-7b57f96db7-fk997 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.55s)
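The pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) relies on busybox nslookup printing the answer on its fifth line, with the IP as the third space-separated field. A small Go sketch of the same extraction; the sample output is illustrative, not captured from this run:

// nslookup_parse.go: replicate awk 'NR==5' | cut -d' ' -f3 on busybox
// nslookup output to pull out the resolved host IP.
package main

import (
	"fmt"
	"strings"
)

func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	// NR==5 -> index 4; Fields approximates cut -d' ' on single-spaced lines
	fields := strings.Fields(lines[4])
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // cut -d' ' -f3
}

func main() {
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.49.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // prints 192.168.49.1
}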

TestMultiControlPlane/serial/AddWorkerNode (59.79s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 node add --alsologtostderr -v 5
E1019 12:38:09.001164  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:38:43.891814  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:38:43.898472  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:38:43.910307  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:38:43.931757  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:38:43.973123  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:38:44.054626  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:38:44.216138  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:38:44.537796  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:38:45.186691  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:38:46.469373  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 node add --alsologtostderr -v 5: (58.762039626s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 status --alsologtostderr -v 5: (1.026306775s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.79s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-874393 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
E1019 12:38:49.030764  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.022502103s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)
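The HAppy* checks above shell out to "minikube profile list --output json" and inspect the reported status of each profile. A sketch of consuming that output in Go; the valid/invalid keys and the Name/Status fields are an assumption about this build's JSON schema, so verify against your binary:

// profile_status.go: decode "minikube profile list --output json" and
// print each valid profile's status.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profile struct {
	Name   string `json:"Name"`
	Status string `json:"Status"`
}

type profileList struct {
	Valid   []profile `json:"valid"`
	Invalid []profile `json:"invalid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status) // e.g. a healthy HA profile reports "HAppy"
	}
}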

TestMultiControlPlane/serial/CopyFile (20.26s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 status --output json --alsologtostderr -v 5: (1.053932519s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp testdata/cp-test.txt ha-874393:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile609672496/001/cp-test_ha-874393.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393:/home/docker/cp-test.txt ha-874393-m02:/home/docker/cp-test_ha-874393_ha-874393-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m02 "sudo cat /home/docker/cp-test_ha-874393_ha-874393-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393:/home/docker/cp-test.txt ha-874393-m03:/home/docker/cp-test_ha-874393_ha-874393-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393 "sudo cat /home/docker/cp-test.txt"
E1019 12:38:54.154192  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m03 "sudo cat /home/docker/cp-test_ha-874393_ha-874393-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393:/home/docker/cp-test.txt ha-874393-m04:/home/docker/cp-test_ha-874393_ha-874393-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m04 "sudo cat /home/docker/cp-test_ha-874393_ha-874393-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp testdata/cp-test.txt ha-874393-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile609672496/001/cp-test_ha-874393-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393-m02:/home/docker/cp-test.txt ha-874393:/home/docker/cp-test_ha-874393-m02_ha-874393.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393 "sudo cat /home/docker/cp-test_ha-874393-m02_ha-874393.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393-m02:/home/docker/cp-test.txt ha-874393-m03:/home/docker/cp-test_ha-874393-m02_ha-874393-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m03 "sudo cat /home/docker/cp-test_ha-874393-m02_ha-874393-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393-m02:/home/docker/cp-test.txt ha-874393-m04:/home/docker/cp-test_ha-874393-m02_ha-874393-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m04 "sudo cat /home/docker/cp-test_ha-874393-m02_ha-874393-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp testdata/cp-test.txt ha-874393-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile609672496/001/cp-test_ha-874393-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393-m03:/home/docker/cp-test.txt ha-874393:/home/docker/cp-test_ha-874393-m03_ha-874393.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393 "sudo cat /home/docker/cp-test_ha-874393-m03_ha-874393.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393-m03:/home/docker/cp-test.txt ha-874393-m02:/home/docker/cp-test_ha-874393-m03_ha-874393-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m02 "sudo cat /home/docker/cp-test_ha-874393-m03_ha-874393-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393-m03:/home/docker/cp-test.txt ha-874393-m04:/home/docker/cp-test_ha-874393-m03_ha-874393-m04.txt
E1019 12:39:04.395684  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m04 "sudo cat /home/docker/cp-test_ha-874393-m03_ha-874393-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp testdata/cp-test.txt ha-874393-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile609672496/001/cp-test_ha-874393-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393-m04:/home/docker/cp-test.txt ha-874393:/home/docker/cp-test_ha-874393-m04_ha-874393.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393 "sudo cat /home/docker/cp-test_ha-874393-m04_ha-874393.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393-m04:/home/docker/cp-test.txt ha-874393-m02:/home/docker/cp-test_ha-874393-m04_ha-874393-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m02 "sudo cat /home/docker/cp-test_ha-874393-m04_ha-874393-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 cp ha-874393-m04:/home/docker/cp-test.txt ha-874393-m03:/home/docker/cp-test_ha-874393-m04_ha-874393-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 ssh -n ha-874393-m03 "sudo cat /home/docker/cp-test_ha-874393-m04_ha-874393-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.26s)
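CopyFile walks a full copy matrix: seed testdata/cp-test.txt onto each node, fan it out to every other node, and read each copy back to verify it. A condensed Go sketch of that matrix, assuming a minikube binary on PATH; node names mirror the log, and error handling is simplified:

// copy_matrix.go: push a test file to every node, then copy it from each
// node to every other node via "minikube cp <node>:<path> <node>:<path>".
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	return exec.Command("minikube", args...).Run()
}

func main() {
	profile := "ha-874393"
	nodes := []string{"ha-874393", "ha-874393-m02", "ha-874393-m03", "ha-874393-m04"}
	for _, src := range nodes {
		// seed the source node, then fan the file out to every other node
		if err := run("-p", profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt"); err != nil {
			fmt.Println("cp to", src, "failed:", err)
			continue
		}
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			target := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			if err := run("-p", profile, "cp", src+":/home/docker/cp-test.txt", target); err != nil {
				fmt.Println("cp", src, "->", dst, "failed:", err)
			}
		}
	}
}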

TestMultiControlPlane/serial/StopSecondaryNode (12.85s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 node stop m02 --alsologtostderr -v 5: (12.042665065s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-874393 status --alsologtostderr -v 5: exit status 7 (804.273623ms)

-- stdout --
	ha-874393
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-874393-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-874393-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-874393-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1019 12:39:22.460413  338231 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:39:22.460626  338231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:39:22.460634  338231 out.go:374] Setting ErrFile to fd 2...
	I1019 12:39:22.460640  338231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:39:22.460932  338231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:39:22.461123  338231 out.go:368] Setting JSON to false
	I1019 12:39:22.461144  338231 mustload.go:65] Loading cluster: ha-874393
	I1019 12:39:22.461651  338231 config.go:182] Loaded profile config "ha-874393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:39:22.461667  338231 status.go:174] checking status of ha-874393 ...
	I1019 12:39:22.462046  338231 notify.go:220] Checking for updates...
	I1019 12:39:22.465381  338231 cli_runner.go:164] Run: docker container inspect ha-874393 --format={{.State.Status}}
	I1019 12:39:22.486978  338231 status.go:371] ha-874393 host status = "Running" (err=<nil>)
	I1019 12:39:22.487005  338231 host.go:66] Checking if "ha-874393" exists ...
	I1019 12:39:22.487390  338231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-874393
	I1019 12:39:22.524922  338231 host.go:66] Checking if "ha-874393" exists ...
	I1019 12:39:22.525275  338231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:39:22.525324  338231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-874393
	I1019 12:39:22.549204  338231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/ha-874393/id_rsa Username:docker}
	I1019 12:39:22.655180  338231 ssh_runner.go:195] Run: systemctl --version
	I1019 12:39:22.662004  338231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:39:22.675899  338231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:39:22.758458  338231 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-19 12:39:22.743992958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:39:22.759055  338231 kubeconfig.go:125] found "ha-874393" server: "https://192.168.49.254:8443"
	I1019 12:39:22.759094  338231 api_server.go:166] Checking apiserver status ...
	I1019 12:39:22.759137  338231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:39:22.771917  338231 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup
	I1019 12:39:22.782432  338231 api_server.go:182] apiserver freezer: "4:freezer:/docker/52041ad39e938600fc29e29270afa5a492134e90e4dcc903f788ee3f271fe556/crio/crio-638bf66fb0ae7bec33d750d61b6f2ee855d581f46571d4bacb498aea6f0dd21e"
	I1019 12:39:22.782508  338231 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/52041ad39e938600fc29e29270afa5a492134e90e4dcc903f788ee3f271fe556/crio/crio-638bf66fb0ae7bec33d750d61b6f2ee855d581f46571d4bacb498aea6f0dd21e/freezer.state
	I1019 12:39:22.791635  338231 api_server.go:204] freezer state: "THAWED"
	I1019 12:39:22.791678  338231 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1019 12:39:22.800282  338231 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1019 12:39:22.800314  338231 status.go:463] ha-874393 apiserver status = Running (err=<nil>)
	I1019 12:39:22.800333  338231 status.go:176] ha-874393 status: &{Name:ha-874393 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:39:22.800367  338231 status.go:174] checking status of ha-874393-m02 ...
	I1019 12:39:22.800732  338231 cli_runner.go:164] Run: docker container inspect ha-874393-m02 --format={{.State.Status}}
	I1019 12:39:22.820751  338231 status.go:371] ha-874393-m02 host status = "Stopped" (err=<nil>)
	I1019 12:39:22.820775  338231 status.go:384] host is not running, skipping remaining checks
	I1019 12:39:22.820782  338231 status.go:176] ha-874393-m02 status: &{Name:ha-874393-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:39:22.820803  338231 status.go:174] checking status of ha-874393-m03 ...
	I1019 12:39:22.821128  338231 cli_runner.go:164] Run: docker container inspect ha-874393-m03 --format={{.State.Status}}
	I1019 12:39:22.839115  338231 status.go:371] ha-874393-m03 host status = "Running" (err=<nil>)
	I1019 12:39:22.839146  338231 host.go:66] Checking if "ha-874393-m03" exists ...
	I1019 12:39:22.839466  338231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-874393-m03
	I1019 12:39:22.857008  338231 host.go:66] Checking if "ha-874393-m03" exists ...
	I1019 12:39:22.857384  338231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:39:22.857441  338231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-874393-m03
	I1019 12:39:22.874205  338231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/ha-874393-m03/id_rsa Username:docker}
	I1019 12:39:22.979160  338231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:39:22.993440  338231 kubeconfig.go:125] found "ha-874393" server: "https://192.168.49.254:8443"
	I1019 12:39:22.993475  338231 api_server.go:166] Checking apiserver status ...
	I1019 12:39:22.993517  338231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:39:23.006808  338231 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	I1019 12:39:23.016330  338231 api_server.go:182] apiserver freezer: "4:freezer:/docker/9ba056c04940574fd3a341a4f5e21f583a0a0d0af196ea870464ab0a509a12a8/crio/crio-deefe512360c2ff8d4f7b3d219a798fe9812460eda8a4dd431be192e90f1309c"
	I1019 12:39:23.016424  338231 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9ba056c04940574fd3a341a4f5e21f583a0a0d0af196ea870464ab0a509a12a8/crio/crio-deefe512360c2ff8d4f7b3d219a798fe9812460eda8a4dd431be192e90f1309c/freezer.state
	I1019 12:39:23.024441  338231 api_server.go:204] freezer state: "THAWED"
	I1019 12:39:23.024470  338231 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1019 12:39:23.032763  338231 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1019 12:39:23.032789  338231 status.go:463] ha-874393-m03 apiserver status = Running (err=<nil>)
	I1019 12:39:23.032799  338231 status.go:176] ha-874393-m03 status: &{Name:ha-874393-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:39:23.032817  338231 status.go:174] checking status of ha-874393-m04 ...
	I1019 12:39:23.033129  338231 cli_runner.go:164] Run: docker container inspect ha-874393-m04 --format={{.State.Status}}
	I1019 12:39:23.050705  338231 status.go:371] ha-874393-m04 host status = "Running" (err=<nil>)
	I1019 12:39:23.050741  338231 host.go:66] Checking if "ha-874393-m04" exists ...
	I1019 12:39:23.051044  338231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-874393-m04
	I1019 12:39:23.069213  338231 host.go:66] Checking if "ha-874393-m04" exists ...
	I1019 12:39:23.069521  338231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:39:23.069568  338231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-874393-m04
	I1019 12:39:23.088126  338231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/ha-874393-m04/id_rsa Username:docker}
	I1019 12:39:23.191049  338231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:39:23.204611  338231 status.go:176] ha-874393-m04 status: &{Name:ha-874393-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.85s)
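For each running control plane, the status trace above verifies two things: the kube-apiserver container's cgroup freezer reads THAWED (i.e. the node is not paused), and /healthz on the load-balanced endpoint answers 200. A rough Go sketch of the same two checks; the cgroup path placeholders must be filled in from docker inspect, and InsecureSkipVerify stands in for minikube's real client-certificate setup:

// apiserver_check.go: confirm the apiserver cgroup is THAWED, then probe
// /healthz on the cluster VIP, mirroring the status checks in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"strings"
)

func main() {
	// <container>/<ctr> are hypothetical placeholders for the IDs seen in the log
	state, err := os.ReadFile("/sys/fs/cgroup/freezer/docker/<container>/crio/<ctr>/freezer.state")
	if err == nil && strings.TrimSpace(string(state)) != "THAWED" {
		fmt.Println("apiserver is frozen (paused)")
		return
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // 200 == Running
}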

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

TestMultiControlPlane/serial/RestartSecondaryNode (30.32s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 node start m02 --alsologtostderr -v 5
E1019 12:39:24.877212  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 node start m02 --alsologtostderr -v 5: (28.835578469s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 status --alsologtostderr -v 5: (1.34541306s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.32s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.47s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.466953833s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.47s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.92s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 stop --alsologtostderr -v 5
E1019 12:40:05.839075  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 stop --alsologtostderr -v 5: (37.416938376s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 start --wait true --alsologtostderr -v 5
E1019 12:41:27.761187  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:41:45.933378  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 start --wait true --alsologtostderr -v 5: (1m37.302548026s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.92s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.04s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 node delete m03 --alsologtostderr -v 5: (11.038573931s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.04s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (36.05s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 stop --alsologtostderr -v 5: (35.936727524s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-874393 status --alsologtostderr -v 5: exit status 7 (114.20813ms)

-- stdout --
	ha-874393
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-874393-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-874393-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1019 12:42:59.567139  350167 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:42:59.567275  350167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:42:59.567286  350167 out.go:374] Setting ErrFile to fd 2...
	I1019 12:42:59.567292  350167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:42:59.567541  350167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:42:59.567781  350167 out.go:368] Setting JSON to false
	I1019 12:42:59.567839  350167 mustload.go:65] Loading cluster: ha-874393
	I1019 12:42:59.567910  350167 notify.go:220] Checking for updates...
	I1019 12:42:59.568813  350167 config.go:182] Loaded profile config "ha-874393": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:42:59.568834  350167 status.go:174] checking status of ha-874393 ...
	I1019 12:42:59.569428  350167 cli_runner.go:164] Run: docker container inspect ha-874393 --format={{.State.Status}}
	I1019 12:42:59.587969  350167 status.go:371] ha-874393 host status = "Stopped" (err=<nil>)
	I1019 12:42:59.587994  350167 status.go:384] host is not running, skipping remaining checks
	I1019 12:42:59.588001  350167 status.go:176] ha-874393 status: &{Name:ha-874393 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:42:59.588030  350167 status.go:174] checking status of ha-874393-m02 ...
	I1019 12:42:59.588397  350167 cli_runner.go:164] Run: docker container inspect ha-874393-m02 --format={{.State.Status}}
	I1019 12:42:59.607220  350167 status.go:371] ha-874393-m02 host status = "Stopped" (err=<nil>)
	I1019 12:42:59.607251  350167 status.go:384] host is not running, skipping remaining checks
	I1019 12:42:59.607257  350167 status.go:176] ha-874393-m02 status: &{Name:ha-874393-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:42:59.607276  350167 status.go:174] checking status of ha-874393-m04 ...
	I1019 12:42:59.607555  350167 cli_runner.go:164] Run: docker container inspect ha-874393-m04 --format={{.State.Status}}
	I1019 12:42:59.627008  350167 status.go:371] ha-874393-m04 host status = "Stopped" (err=<nil>)
	I1019 12:42:59.627035  350167 status.go:384] host is not running, skipping remaining checks
	I1019 12:42:59.627043  350167 status.go:176] ha-874393-m04 status: &{Name:ha-874393-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.05s)

TestMultiControlPlane/serial/RestartCluster (64.79s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1019 12:43:43.890884  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m3.805388298s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (64.79s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.04s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.037730175s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.04s)

TestMultiControlPlane/serial/AddSecondaryNode (80.92s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 node add --control-plane --alsologtostderr -v 5
E1019 12:44:11.603442  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 node add --control-plane --alsologtostderr -v 5: (1m19.845750872s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-874393 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-874393 status --alsologtostderr -v 5: (1.075135896s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.92s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.094933483s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

TestJSONOutput/start/Command (82.97s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-919847 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1019 12:46:45.933122  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-919847 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m22.963370684s)
--- PASS: TestJSONOutput/start/Command (82.97s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-919847 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-919847 --output=json --user=testUser: (5.834689044s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-122817 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-122817 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (92.458907ms)

-- stdout --
	{"specversion":"1.0","id":"f47ab619-0e24-4894-bcd9-2b910dc0bf36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-122817] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"99bb0a75-9977-474a-a9bd-5e26e9a78403","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21772"}}
	{"specversion":"1.0","id":"833429b8-4338-469d-b8bd-26cab9f1cf7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6337db12-e94b-47fd-9148-dbe29d06cc75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig"}}
	{"specversion":"1.0","id":"cb730046-154d-4f89-88ee-268bb0ab8c4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube"}}
	{"specversion":"1.0","id":"33c51269-b311-430d-865f-f8ce644adcd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a9c9adfb-3c4e-4ec3-9346-d8e5865f7f5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5c8d3df2-5cf7-4062-9993-1c37ff7562e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-122817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-122817
--- PASS: TestErrorJSONOutput (0.24s)
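
The stdout block above shows the shape of minikube's --output=json stream: one CloudEvents-style JSON object per line, with the event kind in "type" (io.k8s.sigs.minikube.step, .info, or .error) and the payload under "data". A minimal Go sketch of a consumer for such a stream; it is not part of the test suite, and the struct models only the fields visible in the output above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the fields visible in the stdout block above.
type event struct {
	Type string `json:"type"`
	Data struct {
		Message  string `json:"message"`
		Name     string `json:"name"`
		ExitCode string `json:"exitcode"` // set on .error events, as a string
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines interleaved in the stream
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data.Message)
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("  -> exit code %s (%s)\n", ev.Data.ExitCode, ev.Data.Name)
		}
	}
}

Piping the failed start above through this sketch would end with the io.k8s.sigs.minikube.error event carrying name DRV_UNSUPPORTED_OS and exit code 56.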

TestKicCustomNetwork/create_custom_network (41.38s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-112610 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-112610 --network=: (39.218677578s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-112610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-112610
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-112610: (2.144908301s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.38s)

TestKicCustomNetwork/use_default_bridge_network (37.38s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-299975 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-299975 --network=bridge: (35.257777859s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-299975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-299975
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-299975: (2.087493988s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.38s)

TestKicExistingNetwork (38.93s)

=== RUN   TestKicExistingNetwork
I1019 12:48:32.265859  294518 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1019 12:48:32.281353  294518 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1019 12:48:32.282083  294518 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1019 12:48:32.282102  294518 cli_runner.go:164] Run: docker network inspect existing-network
W1019 12:48:32.296661  294518 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1019 12:48:32.296693  294518 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1019 12:48:32.296716  294518 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1019 12:48:32.296823  294518 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1019 12:48:32.312741  294518 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-319c97358c5c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:2a:99:c3:44:12:51} reservation:<nil>}
I1019 12:48:32.318567  294518 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1019 12:48:32.318932  294518 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400165c9f0}
I1019 12:48:32.319564  294518 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1019 12:48:32.319640  294518 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1019 12:48:32.386058  294518 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-087874 --network=existing-network
E1019 12:48:43.890801  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-087874 --network=existing-network: (36.670151026s)
helpers_test.go:175: Cleaning up "existing-network-087874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-087874
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-087874: (2.095881928s)
I1019 12:49:11.174868  294518 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.93s)
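
The trace above shows the free-subnet walk that precedes network creation: 192.168.49.0/24 is skipped as taken, 192.168.58.0/24 as reserved, and 192.168.67.0/24 is used for the docker network create call. A simplified Go sketch of that pattern; the step-by-9 walk is only inferred from the subnets in this log, and the taken() stub stands in for minikube's real probing of Docker networks and host interfaces:

package main

import (
	"fmt"
	"os/exec"
)

// taken is a stand-in: the real check inspects existing Docker networks
// and host routes, as the cli_runner calls above show.
func taken(subnet string) bool {
	return subnet == "192.168.49.0/24" || subnet == "192.168.58.0/24"
}

func main() {
	for octet := 49; octet < 255; octet += 9 { // 49 -> 58 -> 67, as in the log
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken(subnet) {
			fmt.Println("skipping", subnet)
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		// Mirrors the `docker network create` invocation in the log above.
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"existing-network").CombinedOutput()
		if err != nil {
			fmt.Println("create failed, trying next subnet:", err, string(out))
			continue
		}
		fmt.Println("created existing-network on", subnet)
		return
	}
}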

TestKicCustomSubnet (36.97s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-305436 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-305436 --subnet=192.168.60.0/24: (34.649158988s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-305436 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-305436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-305436
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-305436: (2.299433069s)
--- PASS: TestKicCustomSubnet (36.97s)
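
The assertion in this test reduces to a single Docker inspect call: read the network's first IPAM config entry and compare its Subnet against the CIDR passed via --subnet. A minimal Go sketch of that check, reusing the network name and CIDR from the run above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const want = "192.168.60.0/24" // the --subnet value from the run above
	out, err := exec.Command("docker", "network", "inspect",
		"custom-subnet-305436", "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		fmt.Printf("subnet mismatch: got %s, want %s\n", got, want)
		return
	}
	fmt.Println("subnet verified:", got)
}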

TestKicStaticIP (36.68s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-767813 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-767813 --static-ip=192.168.200.200: (34.268611983s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-767813 ip
helpers_test.go:175: Cleaning up "static-ip-767813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-767813
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-767813: (2.247205574s)
--- PASS: TestKicStaticIP (36.68s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (74.43s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-751380 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-751380 --driver=docker  --container-runtime=crio: (34.719078902s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-753718 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-753718 --driver=docker  --container-runtime=crio: (34.112170201s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-751380
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-753718
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-753718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-753718
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-753718: (2.147619212s)
helpers_test.go:175: Cleaning up "first-751380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-751380
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-751380: (2.038845988s)
--- PASS: TestMinikubeProfile (74.43s)

TestMountStart/serial/StartWithMountFirst (10.17s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-405355 --memory=3072 --mount-string /tmp/TestMountStartserial388791440/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1019 12:51:45.933010  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-405355 --memory=3072 --mount-string /tmp/TestMountStartserial388791440/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.167943664s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.17s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-405355 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (6.33s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-407119 --memory=3072 --mount-string /tmp/TestMountStartserial388791440/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-407119 --memory=3072 --mount-string /tmp/TestMountStartserial388791440/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.330322351s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.33s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-407119 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-405355 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-405355 --alsologtostderr -v=5: (1.709648606s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-407119 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-407119
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-407119: (1.281678662s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (8.79s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-407119
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-407119: (7.789616849s)
--- PASS: TestMountStart/serial/RestartStopped (8.79s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-407119 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (133.86s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-803391 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1019 12:53:43.890882  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-803391 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m13.342743722s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (133.86s)

TestMultiNode/serial/DeployApp2Nodes (4.96s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-803391 -- rollout status deployment/busybox: (3.193153682s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- exec busybox-7b57f96db7-j668g -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- exec busybox-7b57f96db7-j822s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- exec busybox-7b57f96db7-j668g -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- exec busybox-7b57f96db7-j822s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- exec busybox-7b57f96db7-j668g -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- exec busybox-7b57f96db7-j822s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.96s)
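
The deployment check above follows a simple matrix: for each busybox pod, resolve an external name, the in-cluster service name, and its fully qualified form. A compact Go sketch of the same loop, calling kubectl directly with the profile's context (as the label check later in this suite does); pod names are the ones from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-j668g", "busybox-7b57f96db7-j822s"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "multinode-803391",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s failed to resolve %s: %v\n%s", pod, name, err, out)
				continue
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}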

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- exec busybox-7b57f96db7-j668g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- exec busybox-7b57f96db7-j668g -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- exec busybox-7b57f96db7-j822s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-803391 -- exec busybox-7b57f96db7-j822s -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
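
Each pod extracts the host gateway address from busybox nslookup output with awk 'NR==5' | cut -d' ' -f3, i.e. line 5, third space-separated field, counting empty fields the way cut does. A Go sketch of that extraction; the sample output below (including the double space before the address) is illustrative, not captured from this run:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5 and split on
// single spaces without collapsing runs, exactly as cut treats delimiters.
func hostIP(nslookup string) string {
	lines := strings.Split(nslookup, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Assumed busybox-style output; note the two spaces on the last line.
	sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress:  192.168.58.1"
	fmt.Println(hostIP(sample)) // 192.168.58.1
}

The extracted address (192.168.58.1 here, the docker network gateway) is then pinged from inside each pod to prove pod-to-host reachability.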

TestMultiNode/serial/AddNode (59.3s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-803391 -v=5 --alsologtostderr
E1019 12:54:49.002958  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:55:06.965227  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-803391 -v=5 --alsologtostderr: (58.599118547s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.30s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-803391 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.81s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.81s)

TestMultiNode/serial/CopyFile (10.32s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 cp testdata/cp-test.txt multinode-803391:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 cp multinode-803391:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile219853605/001/cp-test_multinode-803391.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 cp multinode-803391:/home/docker/cp-test.txt multinode-803391-m02:/home/docker/cp-test_multinode-803391_multinode-803391-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391-m02 "sudo cat /home/docker/cp-test_multinode-803391_multinode-803391-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 cp multinode-803391:/home/docker/cp-test.txt multinode-803391-m03:/home/docker/cp-test_multinode-803391_multinode-803391-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391-m03 "sudo cat /home/docker/cp-test_multinode-803391_multinode-803391-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 cp testdata/cp-test.txt multinode-803391-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 cp multinode-803391-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile219853605/001/cp-test_multinode-803391-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 cp multinode-803391-m02:/home/docker/cp-test.txt multinode-803391:/home/docker/cp-test_multinode-803391-m02_multinode-803391.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391 "sudo cat /home/docker/cp-test_multinode-803391-m02_multinode-803391.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 cp multinode-803391-m02:/home/docker/cp-test.txt multinode-803391-m03:/home/docker/cp-test_multinode-803391-m02_multinode-803391-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391-m03 "sudo cat /home/docker/cp-test_multinode-803391-m02_multinode-803391-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 cp testdata/cp-test.txt multinode-803391-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 cp multinode-803391-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile219853605/001/cp-test_multinode-803391-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 cp multinode-803391-m03:/home/docker/cp-test.txt multinode-803391:/home/docker/cp-test_multinode-803391-m03_multinode-803391.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391 "sudo cat /home/docker/cp-test_multinode-803391-m03_multinode-803391.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 cp multinode-803391-m03:/home/docker/cp-test.txt multinode-803391-m02:/home/docker/cp-test_multinode-803391-m03_multinode-803391-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 ssh -n multinode-803391-m02 "sudo cat /home/docker/cp-test_multinode-803391-m03_multinode-803391-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.32s)
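
The long command sequence above is an all-pairs matrix: seed each node with testdata/cp-test.txt, copy it back out to the host, copy it across to every other node, and verify each hop with ssh ... sudo cat. A condensed Go sketch of the same enumeration; the host destination path is simplified, and error handling is reduced to printing where the real helpers fail the test:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput(); err != nil {
		fmt.Println("failed:", args, err, string(out))
	}
}

func main() {
	const profile = "multinode-803391"
	nodes := []string{profile, profile + "-m02", profile + "-m03"}
	for _, src := range nodes {
		// Seed the source node, read the file back, and copy it to the host.
		run("-p", profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		run("-p", profile, "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
		run("-p", profile, "cp", src+":/home/docker/cp-test.txt", "/tmp/cp-test_"+src+".txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			run("-p", profile, "cp", src+":/home/docker/cp-test.txt", dst+":"+dstPath)
			run("-p", profile, "ssh", "-n", dst, "sudo cat "+dstPath)
		}
	}
}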

TestMultiNode/serial/StopNode (2.4s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-803391 node stop m03: (1.332164238s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-803391 status: exit status 7 (533.876515ms)

-- stdout --
	multinode-803391
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-803391-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-803391-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-803391 status --alsologtostderr: exit status 7 (536.904391ms)

-- stdout --
	multinode-803391
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-803391-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-803391-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1019 12:55:42.797013  400628 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:55:42.797162  400628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:55:42.797190  400628 out.go:374] Setting ErrFile to fd 2...
	I1019 12:55:42.797197  400628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:55:42.797505  400628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:55:42.797761  400628 out.go:368] Setting JSON to false
	I1019 12:55:42.797820  400628 mustload.go:65] Loading cluster: multinode-803391
	I1019 12:55:42.797888  400628 notify.go:220] Checking for updates...
	I1019 12:55:42.798964  400628 config.go:182] Loaded profile config "multinode-803391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:55:42.798989  400628 status.go:174] checking status of multinode-803391 ...
	I1019 12:55:42.799596  400628 cli_runner.go:164] Run: docker container inspect multinode-803391 --format={{.State.Status}}
	I1019 12:55:42.819432  400628 status.go:371] multinode-803391 host status = "Running" (err=<nil>)
	I1019 12:55:42.819458  400628 host.go:66] Checking if "multinode-803391" exists ...
	I1019 12:55:42.819769  400628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-803391
	I1019 12:55:42.846215  400628 host.go:66] Checking if "multinode-803391" exists ...
	I1019 12:55:42.846513  400628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:55:42.846561  400628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-803391
	I1019 12:55:42.863996  400628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/multinode-803391/id_rsa Username:docker}
	I1019 12:55:42.967124  400628 ssh_runner.go:195] Run: systemctl --version
	I1019 12:55:42.973483  400628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:55:42.986213  400628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:55:43.053360  400628 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-19 12:55:43.043489445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 12:55:43.054085  400628 kubeconfig.go:125] found "multinode-803391" server: "https://192.168.58.2:8443"
	I1019 12:55:43.054121  400628 api_server.go:166] Checking apiserver status ...
	I1019 12:55:43.054170  400628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:55:43.067078  400628 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1231/cgroup
	I1019 12:55:43.075995  400628 api_server.go:182] apiserver freezer: "4:freezer:/docker/e0ce22bb9496051e5cf4e0dc69f052b48c51cc999420a76f9878ec0bc8c77c54/crio/crio-a1627c39e825c78e0185ba9f9c2b0ecb02bbf902088b03aa40274372af4dd234"
	I1019 12:55:43.076148  400628 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e0ce22bb9496051e5cf4e0dc69f052b48c51cc999420a76f9878ec0bc8c77c54/crio/crio-a1627c39e825c78e0185ba9f9c2b0ecb02bbf902088b03aa40274372af4dd234/freezer.state
	I1019 12:55:43.083869  400628 api_server.go:204] freezer state: "THAWED"
	I1019 12:55:43.083943  400628 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1019 12:55:43.092191  400628 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1019 12:55:43.092218  400628 status.go:463] multinode-803391 apiserver status = Running (err=<nil>)
	I1019 12:55:43.092229  400628 status.go:176] multinode-803391 status: &{Name:multinode-803391 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:55:43.092246  400628 status.go:174] checking status of multinode-803391-m02 ...
	I1019 12:55:43.092575  400628 cli_runner.go:164] Run: docker container inspect multinode-803391-m02 --format={{.State.Status}}
	I1019 12:55:43.111115  400628 status.go:371] multinode-803391-m02 host status = "Running" (err=<nil>)
	I1019 12:55:43.111143  400628 host.go:66] Checking if "multinode-803391-m02" exists ...
	I1019 12:55:43.111451  400628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-803391-m02
	I1019 12:55:43.129620  400628 host.go:66] Checking if "multinode-803391-m02" exists ...
	I1019 12:55:43.130016  400628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:55:43.130075  400628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-803391-m02
	I1019 12:55:43.147865  400628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21772-292654/.minikube/machines/multinode-803391-m02/id_rsa Username:docker}
	I1019 12:55:43.251229  400628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:55:43.264412  400628 status.go:176] multinode-803391-m02 status: &{Name:multinode-803391-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:55:43.264445  400628 status.go:174] checking status of multinode-803391-m03 ...
	I1019 12:55:43.264755  400628 cli_runner.go:164] Run: docker container inspect multinode-803391-m03 --format={{.State.Status}}
	I1019 12:55:43.282007  400628 status.go:371] multinode-803391-m03 host status = "Stopped" (err=<nil>)
	I1019 12:55:43.282028  400628 status.go:384] host is not running, skipping remaining checks
	I1019 12:55:43.282035  400628 status.go:176] multinode-803391-m03 status: &{Name:multinode-803391-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)
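
The --alsologtostderr trace above documents how status decides the apiserver is Running: find the kube-apiserver PID, resolve its freezer cgroup, require freezer.state to be THAWED, then GET /healthz on the apiserver. A condensed Go sketch of the last two steps; the cgroup path is a placeholder for the one assembled from the pgrep and /proc/<pid>/cgroup lookups shown in the trace, and certificate verification is skipped purely to keep the sketch short:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Placeholder path: the real one is built from the container and
	// crio IDs visible in the trace above.
	state, err := os.ReadFile("/sys/fs/cgroup/freezer/docker/<container-id>/crio/<ctr-id>/freezer.state")
	if err != nil {
		fmt.Println("freezer state unavailable:", err)
		return
	}
	if s := strings.TrimSpace(string(state)); s != "THAWED" {
		fmt.Println("apiserver container is", s)
		return
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get("https://192.168.58.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // 200 with body "ok" in the run above
}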

TestMultiNode/serial/StartAfterStop (8.5s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-803391 node start m03 -v=5 --alsologtostderr: (7.677736748s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.50s)

TestMultiNode/serial/RestartKeepsNodes (75.68s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-803391
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-803391
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-803391: (25.07431404s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-803391 --wait=true -v=5 --alsologtostderr
E1019 12:56:45.933560  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-803391 --wait=true -v=5 --alsologtostderr: (50.471556652s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-803391
--- PASS: TestMultiNode/serial/RestartKeepsNodes (75.68s)

TestMultiNode/serial/DeleteNode (5.67s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-803391 node delete m03: (4.970059705s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)

TestMultiNode/serial/StopMultiNode (24.04s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-803391 stop: (23.834800236s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-803391 status: exit status 7 (105.835501ms)

-- stdout --
	multinode-803391
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-803391-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-803391 status --alsologtostderr: exit status 7 (94.179789ms)

-- stdout --
	multinode-803391
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-803391-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1019 12:57:37.112520  408385 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:57:37.112690  408385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:57:37.112722  408385 out.go:374] Setting ErrFile to fd 2...
	I1019 12:57:37.112741  408385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:57:37.113050  408385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 12:57:37.113272  408385 out.go:368] Setting JSON to false
	I1019 12:57:37.113341  408385 mustload.go:65] Loading cluster: multinode-803391
	I1019 12:57:37.113411  408385 notify.go:220] Checking for updates...
	I1019 12:57:37.114677  408385 config.go:182] Loaded profile config "multinode-803391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:57:37.114728  408385 status.go:174] checking status of multinode-803391 ...
	I1019 12:57:37.115429  408385 cli_runner.go:164] Run: docker container inspect multinode-803391 --format={{.State.Status}}
	I1019 12:57:37.134150  408385 status.go:371] multinode-803391 host status = "Stopped" (err=<nil>)
	I1019 12:57:37.134171  408385 status.go:384] host is not running, skipping remaining checks
	I1019 12:57:37.134177  408385 status.go:176] multinode-803391 status: &{Name:multinode-803391 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:57:37.134202  408385 status.go:174] checking status of multinode-803391-m02 ...
	I1019 12:57:37.134520  408385 cli_runner.go:164] Run: docker container inspect multinode-803391-m02 --format={{.State.Status}}
	I1019 12:57:37.159385  408385 status.go:371] multinode-803391-m02 host status = "Stopped" (err=<nil>)
	I1019 12:57:37.159406  408385 status.go:384] host is not running, skipping remaining checks
	I1019 12:57:37.159413  408385 status.go:176] multinode-803391-m02 status: &{Name:multinode-803391-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.04s)
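
Note the Non-zero exit handling above: with every host stopped, minikube status reports through its exit code (7 here) rather than succeeding, so callers must capture the output instead of treating the error as fatal. A small Go sketch of that calling convention:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-803391", "status")
	out, err := cmd.Output() // stdout is still populated on a non-zero exit
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 is expected while hosts are stopped, as in the run above.
		fmt.Printf("status exited %d; output:\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("all running:\n%s", out)
}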

TestMultiNode/serial/RestartMultiNode (54.47s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-803391 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-803391 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (53.736498317s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-803391 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.47s)
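
multinode_test.go:404 above verifies the restart by asking kubectl for every node's Ready condition through a go-template. A minimal sketch of the same check, assuming kubectl is on PATH and the current context points at the restarted cluster; the quoting is simplified relative to the logged command line.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test passes: print the status of each node's
	// Ready condition, one per line.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if strings.TrimSpace(line) != "True" {
			fmt.Println("node not Ready:", line)
			return
		}
	}
	fmt.Println("all nodes Ready")
}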

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (38.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-803391
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-803391-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-803391-m02 --driver=docker  --container-runtime=crio: exit status 14 (88.753238ms)

                                                
                                                
-- stdout --
	* [multinode-803391-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-803391-m02' is duplicated with machine name 'multinode-803391-m02' in profile 'multinode-803391'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-803391-m03 --driver=docker  --container-runtime=crio
E1019 12:58:43.890939  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-803391-m03 --driver=docker  --container-runtime=crio: (36.102434136s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-803391
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-803391: exit status 80 (350.059816ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-803391 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-803391-m03 already exists in multinode-803391-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-803391-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-803391-m03: (2.07661106s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.67s)
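
The two expected failures above (exit status 14 and 80) exercise one rule: a new profile or node name must not collide with an existing profile or with any machine inside one. A hedged Go sketch of that rule follows; the profiles map is illustrative, since minikube's real profile store lives under MINIKUBE_HOME.

package main

import "fmt"

// nameConflicts reports whether a proposed name collides with an existing
// profile name or with a machine name belonging to some profile.
func nameConflicts(name string, profiles map[string][]string) bool {
	for profile, machines := range profiles {
		if name == profile {
			return true
		}
		for _, m := range machines {
			if name == m {
				return true // duplicated with a machine in another profile
			}
		}
	}
	return false
}

func main() {
	profiles := map[string][]string{
		"multinode-803391": {"multinode-803391", "multinode-803391-m02"},
	}
	fmt.Println(nameConflicts("multinode-803391-m02", profiles)) // true: rejected, exit status 14
	fmt.Println(nameConflicts("multinode-803391-m03", profiles)) // false: start succeeds
}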

                                                
                                    
TestPreload (127.17s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-774430 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-774430 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m3.793345505s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-774430 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-774430 image pull gcr.io/k8s-minikube/busybox: (2.217615249s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-774430
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-774430: (5.898252357s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-774430 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-774430 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (52.576077949s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-774430 image list
helpers_test.go:175: Cleaning up "test-preload-774430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-774430
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-774430: (2.45476453s)
--- PASS: TestPreload (127.17s)

                                                
                                    
TestScheduledStopUnix (115.16s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-739112 --memory=3072 --driver=docker  --container-runtime=crio
E1019 13:01:45.933512  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-739112 --memory=3072 --driver=docker  --container-runtime=crio: (38.720071518s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-739112 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-739112 -n scheduled-stop-739112
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-739112 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1019 13:02:01.066470  294518 retry.go:31] will retry after 102.939µs: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.068516  294518 retry.go:31] will retry after 111.214µs: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.069997  294518 retry.go:31] will retry after 325.279µs: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.071135  294518 retry.go:31] will retry after 385.532µs: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.072263  294518 retry.go:31] will retry after 605.182µs: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.073394  294518 retry.go:31] will retry after 824.323µs: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.074529  294518 retry.go:31] will retry after 1.107148ms: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.076736  294518 retry.go:31] will retry after 2.034571ms: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.078880  294518 retry.go:31] will retry after 3.703025ms: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.083137  294518 retry.go:31] will retry after 3.955457ms: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.087362  294518 retry.go:31] will retry after 5.09792ms: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.093631  294518 retry.go:31] will retry after 6.299064ms: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.100858  294518 retry.go:31] will retry after 12.564222ms: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.114256  294518 retry.go:31] will retry after 19.726849ms: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.134517  294518 retry.go:31] will retry after 29.070644ms: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.163700  294518 retry.go:31] will retry after 22.14286ms: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
I1019 13:02:01.186945  294518 retry.go:31] will retry after 66.633651ms: open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-739112 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-739112 -n scheduled-stop-739112
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-739112
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-739112 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-739112
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-739112: exit status 7 (72.128152ms)

                                                
                                                
-- stdout --
	scheduled-stop-739112
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-739112 -n scheduled-stop-739112
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-739112 -n scheduled-stop-739112: exit status 7 (73.612448ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-739112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-739112
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-739112: (4.71482531s)
--- PASS: TestScheduledStopUnix (115.16s)
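
The burst of "will retry after ..." lines above is a poll for the scheduled-stop pid file with a roughly doubling backoff. A minimal sketch of that pattern using only the standard library; the path is the one from the log, and the constants are illustrative rather than minikube's actual retry parameters.

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/home/jenkins/minikube-integration/21772-292654/.minikube/profiles/scheduled-stop-739112/pid"
	delay := 100 * time.Microsecond
	for i := 1; i <= 17; i++ {
		if _, err := os.Stat(path); err == nil {
			fmt.Println("pid file present")
			return
		}
		fmt.Printf("retry %d: will retry after %v\n", i, delay)
		time.Sleep(delay)
		delay *= 2 // grow the backoff, as the logged intervals do
	}
	fmt.Println("gave up waiting for pid file")
}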

                                                
                                    
TestInsufficientStorage (13.81s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-126728 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-126728 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.194095264s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"aba3941d-7e41-472f-baca-52c5238d4f51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-126728] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"49587ece-ed70-47b0-a012-1a6ad81c2851","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21772"}}
	{"specversion":"1.0","id":"8a1d67f1-4890-4079-87a6-b8756e7581d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eefeccfb-4abf-4407-82f8-ee5b074057d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig"}}
	{"specversion":"1.0","id":"47ee0038-c4e5-45a3-b160-8246e17afca0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube"}}
	{"specversion":"1.0","id":"6c306549-3fcd-464b-9455-3bb7563a1bc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8932d639-513d-442d-b07a-980d6047b2d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"afb9e54b-dc4c-40af-a9b0-1f2182b29d56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0286f056-2d62-42d1-a7d3-6be1c25c1775","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2a94c9b7-7119-4bb9-9ec8-9d9af7452177","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e29277ee-eb36-4c66-b240-93806d534531","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"dc558256-1528-4e7c-bb4a-57f8cc4a35d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-126728\" primary control-plane node in \"insufficient-storage-126728\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d130593b-f976-4a36-afd7-61303faa5c4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2498294c-b33e-4913-99e3-2767292ab5a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"656f6297-d168-47b0-8365-891d296e1fd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-126728 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-126728 --output=json --layout=cluster: exit status 7 (314.496901ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-126728","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-126728","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1019 13:03:28.451313  424642 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-126728" does not appear in /home/jenkins/minikube-integration/21772-292654/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-126728 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-126728 --output=json --layout=cluster: exit status 7 (308.837479ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-126728","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-126728","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1019 13:03:28.761279  424709 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-126728" does not appear in /home/jenkins/minikube-integration/21772-292654/kubeconfig
	E1019 13:03:28.771443  424709 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/insufficient-storage-126728/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-126728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-126728
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-126728: (1.990756421s)
--- PASS: TestInsufficientStorage (13.81s)
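
With --output=json, minikube emits one CloudEvents-style JSON object per line, as in the stdout above; the out-of-disk condition arrives as an io.k8s.sigs.minikube.error event carrying an exitcode of "26". Below is a minimal sketch of scanning that stream for error events; the struct fields mirror the logged objects, everything else is illustrative.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the fields of the logged CloudEvents objects we care
// about; data values are strings in the emitted JSON.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)         // e.g. minikube start --output=json | thisprogram
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // error events can be very long lines
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip anything that is not a JSON event
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s: exit code %s\n", e.Data["name"], e.Data["exitcode"])
		}
	}
}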

                                                
                                    
TestRunningBinaryUpgrade (54.84s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3762276104 start -p running-upgrade-495426 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3762276104 start -p running-upgrade-495426 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.138503122s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-495426 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-495426 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.014785926s)
helpers_test.go:175: Cleaning up "running-upgrade-495426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-495426
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-495426: (2.036370794s)
--- PASS: TestRunningBinaryUpgrade (54.84s)

                                                
                                    
TestKubernetesUpgrade (351.48s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.937436068s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-104724
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-104724: (1.462019084s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-104724 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-104724 status --format={{.Host}}: exit status 7 (101.499315ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m31.389764381s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-104724 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (99.764612ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-104724] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-104724
	    minikube start -p kubernetes-upgrade-104724 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1047242 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-104724 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-104724 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.128349439s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-104724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-104724
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-104724: (2.270108225s)
--- PASS: TestKubernetesUpgrade (351.48s)
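
The exit-status-106 failure above is the downgrade guard: the requested Kubernetes version may move forward but not backward relative to the existing cluster. A hedged sketch of that comparison using golang.org/x/mod/semver, which is an assumption on my part; minikube's own version handling may differ in detail.

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkUpgrade refuses any request older than the running cluster version.
func checkUpgrade(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkUpgrade("v1.34.1", "v1.28.0")) // refused, as in the log
	fmt.Println(checkUpgrade("v1.28.0", "v1.34.1")) // allowed
}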

                                                
                                    
TestMissingContainerUpgrade (113.13s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4171573751 start -p missing-upgrade-754625 --memory=3072 --driver=docker  --container-runtime=crio
E1019 13:03:43.893858  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4171573751 start -p missing-upgrade-754625 --memory=3072 --driver=docker  --container-runtime=crio: (1m0.946809643s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-754625
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-754625
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-754625 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-754625 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.20333413s)
helpers_test.go:175: Cleaning up "missing-upgrade-754625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-754625
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-754625: (2.350433571s)
--- PASS: TestMissingContainerUpgrade (113.13s)

                                                
                                    
TestPause/serial/Start (95.28s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-052658 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-052658 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m35.278977605s)
--- PASS: TestPause/serial/Start (95.28s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (28.98s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-052658 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-052658 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.95748292s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.98s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (64.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.307556370 start -p stopped-upgrade-456406 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.307556370 start -p stopped-upgrade-456406 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.466999059s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.307556370 -p stopped-upgrade-456406 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.307556370 -p stopped-upgrade-456406 stop: (1.257879017s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-456406 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1019 13:06:45.939013  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-456406 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.579924766s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (64.31s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-456406
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-456406: (1.243324825s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-016182 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-016182 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (105.63053ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-016182] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
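
The exit-status-14 failure above is a flag-compatibility check: --no-kubernetes and an explicit --kubernetes-version are mutually exclusive. A minimal sketch of that validation with the standard flag package; the flag names match the CLI, the rest is illustrative.

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	version := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noK8s && *version != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // MK_USAGE, as in the log
	}
	fmt.Println("flags are compatible")
}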

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (36.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-016182 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1019 13:08:43.891163  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-016182 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.278457389s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-016182 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.63s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (12.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-016182 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-016182 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.140346674s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-016182 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-016182 status -o json: exit status 2 (407.656388ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-016182","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-016182
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-016182: (2.110876083s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.66s)

                                                
                                    
TestNoKubernetes/serial/Start (5.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-016182 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-016182 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.60827579s)
--- PASS: TestNoKubernetes/serial/Start (5.61s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-016182 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-016182 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.699355ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
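
The "Process exited with status 3" above is the expected outcome: systemctl is-active exits non-zero when the unit is not active, which is exactly what the test wants after a --no-kubernetes start. A minimal sketch of reading that exit code in Go (the test runs the command over minikube ssh; this sketch runs it locally).

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("kubelet not active, exit status", ee.ExitCode())
		return
	}
	if err == nil {
		fmt.Println("kubelet is active")
	}
}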

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.04s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-016182
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-016182: (1.32238208s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-016182 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-016182 --driver=docker  --container-runtime=crio: (7.17293854s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.17s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-016182 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-016182 "sudo systemctl is-active --quiet service kubelet": exit status 1 (308.321167ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestNetworkPlugins/group/false (3.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-696007 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-696007 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (183.174482ms)

                                                
                                                
-- stdout --
	* [false-696007] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 13:09:38.527412  460417 out.go:360] Setting OutFile to fd 1 ...
	I1019 13:09:38.527551  460417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:09:38.527562  460417 out.go:374] Setting ErrFile to fd 2...
	I1019 13:09:38.527568  460417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 13:09:38.527824  460417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-292654/.minikube/bin
	I1019 13:09:38.528233  460417 out.go:368] Setting JSON to false
	I1019 13:09:38.529139  460417 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10329,"bootTime":1760869050,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1019 13:09:38.529268  460417 start.go:141] virtualization:  
	I1019 13:09:38.532880  460417 out.go:179] * [false-696007] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1019 13:09:38.536002  460417 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 13:09:38.536060  460417 notify.go:220] Checking for updates...
	I1019 13:09:38.541953  460417 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 13:09:38.545001  460417 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-292654/kubeconfig
	I1019 13:09:38.548013  460417 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-292654/.minikube
	I1019 13:09:38.550911  460417 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1019 13:09:38.553818  460417 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 13:09:38.557182  460417 config.go:182] Loaded profile config "kubernetes-upgrade-104724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 13:09:38.557323  460417 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 13:09:38.583028  460417 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1019 13:09:38.583157  460417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 13:09:38.643963  460417 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-19 13:09:38.634654205 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1019 13:09:38.644078  460417 docker.go:318] overlay module found
	I1019 13:09:38.647179  460417 out.go:179] * Using the docker driver based on user configuration
	I1019 13:09:38.649920  460417 start.go:305] selected driver: docker
	I1019 13:09:38.649940  460417 start.go:925] validating driver "docker" against <nil>
	I1019 13:09:38.649954  460417 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 13:09:38.653570  460417 out.go:203] 
	W1019 13:09:38.656471  460417 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1019 13:09:38.659293  460417 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-696007 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-696007

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-696007

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-696007

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-696007

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-696007

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-696007

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-696007

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-696007

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-696007

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-696007

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-696007

>>> host: crictl pods:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: crictl containers:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> k8s: describe netcat deployment:
error: context "false-696007" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-696007" does not exist

>>> k8s: netcat logs:
error: context "false-696007" does not exist

>>> k8s: describe coredns deployment:
error: context "false-696007" does not exist

>>> k8s: describe coredns pods:
error: context "false-696007" does not exist

>>> k8s: coredns logs:
error: context "false-696007" does not exist

>>> k8s: describe api server pod(s):
error: context "false-696007" does not exist

>>> k8s: api server logs:
error: context "false-696007" does not exist

>>> host: /etc/cni:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: ip a s:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: ip r s:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: iptables-save:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: iptables table nat:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> k8s: describe kube-proxy daemon set:
error: context "false-696007" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-696007" does not exist

>>> k8s: kube-proxy logs:
error: context "false-696007" does not exist

>>> host: kubelet daemon status:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: kubelet daemon config:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> k8s: kubelet logs:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 13:06:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-104724
contexts:
- context:
    cluster: kubernetes-upgrade-104724
    user: kubernetes-upgrade-104724
  name: kubernetes-upgrade-104724
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-104724
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/client.crt
    client-key: /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/client.key
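
The "context was not found" and "Profile ... not found" errors above share one cause: the kubeconfig dumped here only knows the kubernetes-upgrade-104724 context, so every false-696007 lookup fails. A minimal client-go sketch of that lookup (illustrative only; the path constant and the printing are assumptions, not minikube's code):

// Sketch: check whether a named context exists in a kubeconfig --
// the condition behind "context was not found for specified context".
// Assumes k8s.io/client-go; not part of the test suite.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// clientcmd.RecommendedHomeFile resolves to ~/.kube/config.
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	want := "false-696007"
	if _, ok := cfg.Contexts[want]; !ok {
		fmt.Printf("context %q not found; known contexts:\n", want)
		for name := range cfg.Contexts {
			fmt.Println(" -", name) // here: kubernetes-upgrade-104724
		}
	}
}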

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-696007

>>> host: docker daemon status:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: docker daemon config:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: /etc/docker/daemon.json:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: docker system info:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: cri-docker daemon status:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: cri-docker daemon config:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: cri-dockerd version:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: containerd daemon status:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: containerd daemon config:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: /etc/containerd/config.toml:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: containerd config dump:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: crio daemon status:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: crio daemon config:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: /etc/crio:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

>>> host: crio config:
* Profile "false-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-696007"

----------------------- debugLogs end: false-696007 [took: 3.432157487s] --------------------------------
helpers_test.go:175: Cleaning up "false-696007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-696007
--- PASS: TestNetworkPlugins/group/false (3.79s)

TestStartStop/group/old-k8s-version/serial/FirstStart (69.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m9.061127732s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (69.06s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-842494 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a8b3e381-a2c1-49ea-a27d-b299c312c182] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a8b3e381-a2c1-49ea-a27d-b299c312c182] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003485196s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-842494 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.56s)
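
Each DeployApp step above follows the same pattern: create testdata/busybox.yaml with kubectl, then poll until every pod matching integration-test=busybox reports Running ("healthy within 9.003485196s"). A rough client-go sketch of that wait loop, assuming it mirrors what helpers_test.go does (function name and flow are illustrative, not minikube's actual helper):

// Sketch of the wait-for-label-selector loop the helpers perform for
// selectors such as "integration-test=busybox". Assumes client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		// "healthy within Ns" in the log is this condition being met.
		if len(pods.Items) > 0 && running == len(pods.Items) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForRunning(cs, "default", "integration-test=busybox", 8*time.Minute))
}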

TestStartStop/group/no-preload/serial/FirstStart (68.4s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1019 13:13:43.890761  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m8.401616162s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.40s)

TestStartStop/group/old-k8s-version/serial/Stop (14.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-842494 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-842494 --alsologtostderr -v=3: (14.441171814s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-842494 -n old-k8s-version-842494
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-842494 -n old-k8s-version-842494: exit status 7 (77.531501ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-842494 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
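
The Non-zero exit above is expected: immediately after a stop, `status --format={{.Host}}` prints Stopped and exits 7, which the test explicitly tolerates ("may be ok"). A standard-library sketch of accepting that exit code (the binary and profile names are taken from this run; the handling itself is an assumption, not the test's code):

// Sketch: accept minikube's non-zero "Stopped" status the way the test
// does. Only Go's standard library is used.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-842494")
	out, err := cmd.Output() // stdout is returned even on non-zero exit
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Exit status 7 with "Stopped" on stdout is what the log
		// records as "status error: exit status 7 (may be ok)".
		fmt.Printf("host stopped: %s", out)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("host status: %s", out)
}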

TestStartStop/group/old-k8s-version/serial/SecondStart (61.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-842494 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.034665678s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-842494 -n old-k8s-version-842494
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (61.39s)

TestStartStop/group/no-preload/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-108149 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [dc85de5e-425e-47b2-916e-f27d88458ea3] Pending
helpers_test.go:352: "busybox" [dc85de5e-425e-47b2-916e-f27d88458ea3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [dc85de5e-425e-47b2-916e-f27d88458ea3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.007744534s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-108149 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.42s)

TestStartStop/group/no-preload/serial/Stop (12.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-108149 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-108149 --alsologtostderr -v=3: (12.108115569s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7m5tv" [9753fd7f-7e7b-4446-adf9-ab41cecf44d6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003577671s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7m5tv" [9753fd7f-7e7b-4446-adf9-ab41cecf44d6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005903131s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-842494 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-108149 -n no-preload-108149
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-108149 -n no-preload-108149: exit status 7 (88.581742ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-108149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (61.13s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-108149 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m0.709664529s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-108149 -n no-preload-108149
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (61.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-842494 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/FirstStart (85.87s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.869136572s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.87s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8wvh6" [1e8b4000-201a-4e13-a3ec-4b0799d1f3cd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003501371s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8wvh6" [1e8b4000-201a-4e13-a3ec-4b0799d1f3cd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003730053s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-108149 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-108149 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1019 13:16:45.933342  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/addons-694780/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m19.893695727s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.89s)

TestStartStop/group/embed-certs/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-834340 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6ad38544-fd49-4f9c-8c24-24f230946955] Pending
helpers_test.go:352: "busybox" [6ad38544-fd49-4f9c-8c24-24f230946955] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6ad38544-fd49-4f9c-8c24-24f230946955] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005106057s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-834340 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.39s)

TestStartStop/group/embed-certs/serial/Stop (12.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-834340 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-834340 --alsologtostderr -v=3: (12.213018581s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.21s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-834340 -n embed-certs-834340
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-834340 -n embed-certs-834340: exit status 7 (73.58247ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-834340 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (55.13s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-834340 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.784176446s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-834340 -n embed-certs-834340
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.13s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-455348 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [be6a9614-a438-46fe-8247-1f3e80f868a4] Pending
helpers_test.go:352: "busybox" [be6a9614-a438-46fe-8247-1f3e80f868a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [be6a9614-a438-46fe-8247-1f3e80f868a4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003800373s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-455348 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-455348 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-455348 --alsologtostderr -v=3: (12.005716884s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m9x8r" [86d791c0-5ed1-48b8-acec-70e583fc2449] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003660265s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-455348 -n default-k8s-diff-port-455348
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-455348 -n default-k8s-diff-port-455348: exit status 7 (71.932155ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-455348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (61.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-455348 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m0.736886881s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-455348 -n default-k8s-diff-port-455348
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (61.20s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m9x8r" [86d791c0-5ed1-48b8-acec-70e583fc2449] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006568714s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-834340 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-834340 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/FirstStart (40.58s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-895642 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1019 13:18:41.109836  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:18:43.671327  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:18:43.891796  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/functional-970848/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:18:48.792796  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:18:59.034833  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:19:19.517919  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-895642 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.584779974s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.58s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tvbrn" [de0e40c0-c28e-49e9-b0a8-4b3f8aa746df] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003488952s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/newest-cni/serial/Stop (1.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-895642 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-895642 --alsologtostderr -v=3: (1.352963921s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.35s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895642 -n newest-cni-895642
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895642 -n newest-cni-895642: exit status 7 (71.176127ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-895642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (17.54s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-895642 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-895642 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (16.993129086s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-895642 -n newest-cni-895642
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.54s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tvbrn" [de0e40c0-c28e-49e9-b0a8-4b3f8aa746df] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003087228s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-455348 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.15s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-455348 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-895642 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

TestNetworkPlugins/group/auto/Start (89.28s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m29.275970396s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.28s)

TestNetworkPlugins/group/kindnet/Start (85.43s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1019 13:19:55.458220  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:20:00.480936  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:20:00.584153  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:20:10.825606  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:20:31.307100  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:21:12.269215  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m25.425922181s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.43s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-696007 "pgrep -a kubelet"
I1019 13:21:16.787050  294518 config.go:182] Loaded profile config "auto-696007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-696007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kmr9m" [a92b36e7-e6e0-4dac-9abd-043cef80a181] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kmr9m" [a92b36e7-e6e0-4dac-9abd-043cef80a181] Running
E1019 13:21:22.402905  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003672136s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)
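
(The NetCatPod step force-replaces testdata/netcat-deployment.yaml and then polls until a pod labelled app=netcat comes up; the "healthy within 10.003672136s" figure is that poll converging. A hedged sketch of the same wait loop, simplified to check the Running phase rather than full readiness, with a shortened timeout and the context name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForNetcat polls kubectl until a pod labelled app=netcat reports the
// Running phase, mirroring the wait the harness logs above.
func waitForNetcat(kubeContext string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no Running app=netcat pod within %v", timeout)
}

func main() {
	if err := waitForNetcat("auto-696007", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

)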

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-slpmb" [16e5b5c8-7f71-43de-a6f1-c7c4649128ac] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004115693s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-696007 "pgrep -a kubelet"
I1019 13:21:26.546344  294518 config.go:182] Loaded profile config "kindnet-696007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-696007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2wb8v" [ef28216a-0565-40c9-bdaa-a4caf087c60d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2wb8v" [ef28216a-0565-40c9-bdaa-a4caf087c60d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004105824s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.27s)

TestNetworkPlugins/group/auto/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-696007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-696007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)
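
(The Localhost and HairPin checks reuse the same netcat pod: the first verifies the pod can reach itself on localhost:8080, the second that it can reach its own Service by name, which only succeeds when the CNI handles hairpin traffic back to the originating pod. A sketch of both probes, assuming the Service is named netcat as the commands above imply:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs the same nc one-liner the tests log: -z scan-only, -w 5 a
// five-second timeout, against port 8080 of the given target.
func probe(kubeContext, target string) error {
	return exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)).Run()
}

func main() {
	// "localhost" is the Localhost check; "netcat" (the Service name) is
	// the hairpin check.
	for _, target := range []string{"localhost", "netcat"} {
		if err := probe("kindnet-696007", target); err != nil {
			fmt.Printf("%s probe failed: %v\n", target, err)
		} else {
			fmt.Printf("%s probe ok\n", target)
		}
	}
}

)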

TestNetworkPlugins/group/calico/Start (76.55s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m16.544965077s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.55s)

TestNetworkPlugins/group/custom-flannel/Start (68.91s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1019 13:22:34.191347  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:22:58.620719  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:22:58.627091  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:22:58.638471  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:22:58.659838  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:22:58.701215  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:22:58.782610  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:22:58.944050  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:22:59.265558  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:22:59.908075  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:23:01.189862  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:23:03.752555  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m8.905416524s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.91s)
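
(Unlike the named --cni=kindnet/calico/flannel variants, this run passes a manifest path, so minikube applies that YAML as the cluster's CNI itself. An equivalent invocation from Go, flags copied from the command logged above:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// --cni accepts a manifest path instead of a built-in CNI name;
	// minikube then applies the YAML to the freshly started cluster.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "custom-flannel-696007",
		"--memory=3072", "--wait=true", "--wait-timeout=15m",
		"--cni=testdata/kube-flannel.yaml", "--driver=docker",
		"--container-runtime=crio")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}

)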

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-9ngvg" [c0089628-5b57-4de4-975b-caaa837fd035] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1019 13:23:08.874325  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-9ngvg" [c0089628-5b57-4de4-975b-caaa837fd035] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00354012s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-696007 "pgrep -a kubelet"
I1019 13:23:10.206969  294518 config.go:182] Loaded profile config "custom-flannel-696007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-696007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6hphm" [37204f1b-74a0-489f-9796-701ce104cbce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6hphm" [37204f1b-74a0-489f-9796-701ce104cbce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004266192s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-696007 "pgrep -a kubelet"
I1019 13:23:13.712602  294518 config.go:182] Loaded profile config "calico-696007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (11.34s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-696007 replace --force -f testdata/netcat-deployment.yaml
I1019 13:23:14.032692  294518 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-82p2f" [beb7ff6d-af63-477e-9859-a0e46045ed2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-82p2f" [beb7ff6d-af63-477e-9859-a0e46045ed2a] Running
E1019 13:23:19.116511  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00398684s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.34s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-696007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/calico/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-696007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

TestNetworkPlugins/group/calico/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (79.84s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m19.842395523s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.84s)

TestNetworkPlugins/group/flannel/Start (64.91s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1019 13:24:06.244441  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/old-k8s-version-842494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:24:20.559992  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 13:24:50.321473  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/no-preload-108149/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m4.90798165s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.91s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-6xh5s" [7990fb82-d427-4eb9-8664-cba9b023c632] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003008056s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-696007 "pgrep -a kubelet"
I1019 13:25:04.568078  294518 config.go:182] Loaded profile config "flannel-696007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (11.26s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-696007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4kgqm" [5a38d41c-36ff-4dec-ba7e-e2179fb34252] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4kgqm" [5a38d41c-36ff-4dec-ba7e-e2179fb34252] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003250375s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-696007 "pgrep -a kubelet"
I1019 13:25:08.760881  294518 config.go:182] Loaded profile config "enable-default-cni-696007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-696007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b6wzc" [9863761b-4bf1-40d3-9212-fae5179dbb07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b6wzc" [9863761b-4bf1-40d3-9212-fae5179dbb07] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003961805s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.30s)

TestNetworkPlugins/group/flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-696007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-696007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (81.36s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1019 13:25:42.481356  294518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/default-k8s-diff-port-455348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-696007 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m21.364467126s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.36s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-696007 "pgrep -a kubelet"
I1019 13:27:03.609691  294518 config.go:182] Loaded profile config "bridge-696007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-696007 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bxrq5" [3436af8c-741c-4c21-8b5b-9d123518ebdc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bxrq5" [3436af8c-741c-4c21-8b5b-9d123518ebdc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003813001s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-696007 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-696007 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.42s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-107639 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-107639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-107639
--- SKIP: TestDownloadOnlyKic (0.42s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-418719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-418719
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.64s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-696007 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-696007

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-696007

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-696007

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-696007

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-696007

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-696007

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-696007

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-696007

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-696007

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-696007

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: /etc/hosts:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: /etc/resolv.conf:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-696007

>>> host: crictl pods:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: crictl containers:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> k8s: describe netcat deployment:
error: context "kubenet-696007" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-696007" does not exist

>>> k8s: netcat logs:
error: context "kubenet-696007" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-696007" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-696007" does not exist

>>> k8s: coredns logs:
error: context "kubenet-696007" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-696007" does not exist

>>> k8s: api server logs:
error: context "kubenet-696007" does not exist

>>> host: /etc/cni:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: ip a s:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: ip r s:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: iptables-save:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: iptables table nat:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-696007" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-696007" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-696007" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: kubelet daemon config:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> k8s: kubelet logs:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 13:06:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-104724
contexts:
- context:
    cluster: kubernetes-upgrade-104724
    user: kubernetes-upgrade-104724
  name: kubernetes-upgrade-104724
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-104724
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/client.crt
    client-key: /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-696007

>>> host: docker daemon status:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: docker daemon config:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: docker system info:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: cri-docker daemon status:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: cri-docker daemon config:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: cri-dockerd version:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: containerd daemon status:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: containerd daemon config:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: containerd config dump:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: crio daemon status:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: crio daemon config:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: /etc/crio:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

>>> host: crio config:
* Profile "kubenet-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-696007"

----------------------- debugLogs end: kubenet-696007 [took: 3.49644963s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-696007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-696007
--- SKIP: TestNetworkPlugins/group/kubenet (3.64s)
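The debugLogs block above is mechanical: each ">>> name:" header is one diagnostic command run against the profile, and since the kubenet-696007 profile was never created, every command fails the same way. A minimal Go sketch of that collect-and-print pattern (the command list, names, and flags here are illustrative assumptions, not minikube's actual helper):

package main

import (
	"fmt"
	"os/exec"
)

// diagCmd pairs a human-readable header with a command to run,
// mirroring the ">>> name:" sections in the debugLogs output above.
type diagCmd struct {
	name string
	args []string
}

func main() {
	profile := "kubenet-696007" // hypothetical: the profile under inspection
	cmds := []diagCmd{
		{"k8s: kube-proxy logs", []string{"kubectl", "--context", profile, "logs", "-n", "kube-system", "-l", "k8s-app=kube-proxy"}},
		{"host: crio daemon status", []string{"minikube", "-p", profile, "ssh", "sudo systemctl status crio"}},
	}
	for _, c := range cmds {
		fmt.Printf(">>> %s:\n", c.name)
		// CombinedOutput captures stderr too, which is why error text such as
		// `context "kubenet-696007" does not exist` lands in the report verbatim.
		out, _ := exec.Command(c.args[0], c.args[1:]...).CombinedOutput()
		fmt.Printf("%s\n", out)
	}
}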

                                                
                                    
TestNetworkPlugins/group/cilium (4.2s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-696007 [pass: true] --------------------------------
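For context on the netcat entries that follow: with a running cluster they would exercise service DNS by resolving kubernetes.default through the cluster DNS service at 10.96.0.10 over both udp/53 and tcp/53. A rough Go equivalent of those two dig probes, assuming the standard library resolver (purely illustrative; here the checks all fail earlier because the cilium-696007 context does not exist):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// probe resolves the service name against the cluster DNS address
// (10.96.0.10, as in the dig checks below) over the given protocol.
func probe(network string) {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, _, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// Force every lookup through the cluster DNS service IP.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	fmt.Println(network, addrs, err)
}

func main() {
	probe("udp") // same intent as: dig @10.96.0.10 ... udp/53
	probe("tcp") // same intent as: dig @10.96.0.10 ... tcp/53
}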
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-696007

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-696007

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-696007

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-696007

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-696007

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-696007

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-696007

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-696007

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-696007

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-696007

>>> host: /etc/nsswitch.conf:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: /etc/hosts:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: /etc/resolv.conf:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-696007

>>> host: crictl pods:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: crictl containers:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> k8s: describe netcat deployment:
error: context "cilium-696007" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-696007" does not exist

>>> k8s: netcat logs:
error: context "cilium-696007" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-696007" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-696007" does not exist

>>> k8s: coredns logs:
error: context "cilium-696007" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-696007" does not exist

>>> k8s: api server logs:
error: context "cilium-696007" does not exist

>>> host: /etc/cni:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: ip a s:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: ip r s:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: iptables-save:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: iptables table nat:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-696007

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-696007

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-696007" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-696007" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-696007

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-696007

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-696007" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-696007" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-696007" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-696007" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-696007" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: kubelet daemon config:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> k8s: kubelet logs:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-292654/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 13:06:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-104724
contexts:
- context:
    cluster: kubernetes-upgrade-104724
    user: kubernetes-upgrade-104724
  name: kubernetes-upgrade-104724
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-104724
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/client.crt
    client-key: /home/jenkins/minikube-integration/21772-292654/.minikube/profiles/kubernetes-upgrade-104724/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-696007

>>> host: docker daemon status:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: docker daemon config:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: docker system info:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: cri-docker daemon status:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: cri-docker daemon config:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: cri-dockerd version:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: containerd daemon status:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: containerd daemon config:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: containerd config dump:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: crio daemon status:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: crio daemon config:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: /etc/crio:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

>>> host: crio config:
* Profile "cilium-696007" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-696007"

----------------------- debugLogs end: cilium-696007 [took: 4.038968872s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-696007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-696007
--- SKIP: TestNetworkPlugins/group/cilium (4.20s)
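
Both kubenet and cilium are reported as SKIP rather than FAIL because each test bails out up front; the net_test.go:102 line above is an ordinary testing skip. A minimal sketch of that pattern (illustrative, not the actual net_test.go source):

package net_test

import "testing"

func TestNetworkPluginsCilium(t *testing.T) {
	// t.Skip logs the reason and stops the test immediately, so the
	// report records SKIP instead of FAIL; deferred cleanup (such as
	// the profile deletion above) still runs.
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
}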